Why doesn't the REST API use HTTP headers and return codes like everyone else?
The ISO/OSI model defines the different layers required for communication between parties. The model is divided into 7 layers, from hardware to application. The core principle is that higher layers should not care about lower ones and can assume they are being taken care of. Hence, when you send an email using SMTP, you do not need to know how to calibrate the electric pulse of the wireless connection.
Why only 7 layers?
Because this is a normative reference. Therefore, everything else is beyond (or encapsulated in) the 7th layer and thus considered application-specific, which is outside the scope of the norm. The HTTP protocol is considered layer 7.
But HTTP is just a transport layer!
Indeed, you can exchange many different types of things over HTTP, starting with HTML. And indeed, according to the encapsulation principle of the ISO/OSI model, the HTML page you see does not care about the HTTP-specific machinery underneath. The article you read will be the same regardless of the compression algorithm used by the server to transfer the HTTP payload.
REST API is layer 8
Therefore, we consider that the REST API lies in an 8th layer on top of HTTP. And according to the same encapsulation principle, we should not rely on lower-level specifics: the HTTP headers, the HTTP response code, the HTTP encoding, and so on.
The REST API speaks JSON and this is what matters.
But the request verb (GET, POST) specifies what you want to do!
Many APIs expose low-level atomic endpoints such as "create an entity", "list elements", "modify something", and so on.
Although these might be useful, a real business operation may require dozens of such low-level operations. A more valuable API would expose a single endpoint to perform all those tasks at once, e.g.: update the employee contract, send a notification to HR, create a new history record, and return a downloadable signed document.
Is this a GET, a POST, a PUT...? Not really any of those! So instead of trying to force an awkward HTTP request verb on top of it, let's just consider that the actual content is what matters, not the envelope around it.
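As a sketch of that idea, here is a hypothetical request body (the field names are ours for illustration, not Busit's documented API) where the operation itself is named in the JSON content, leaving the HTTP verb as a dumb envelope:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: the business operation and all its parameters live in
// the JSON body, so the HTTP verb carries no meaning of its own.
public class BusinessRequestDemo {
    public static Map<String, Object> buildRequest() {
        Map<String, Object> request = new LinkedHashMap<>();
        request.put("operation", "update-employee-contract"); // what to do
        request.put("employee", 1042);                        // on what
        request.put("notifyHr", true);                        // side effects...
        request.put("createHistoryRecord", true);
        request.put("returnSignedDocument", true);            // ...and outputs
        return request;
    }

    public static void main(String[] args) {
        // Whether this travels over GET, POST or PUT does not change its meaning.
        System.out.println(buildRequest());
    }
}
```

The content describes the whole business operation; the transport underneath could change without touching it.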
But the return code gives a quick hint about the result!
Indeed, the return code will tell you whether the HTTP request succeeded (code 200) or failed (e.g. error 500). But "what" succeeded or failed?
If you automate a script to send a request and just expect a 200 return code, but you typed a wrong yet valid URL that ends up somewhere else and returns a nice picture of a cat... that is still return code 200!
What if you have an API endpoint that is supposed to return an ID? That is return code 200 too... but you still need to parse the response anyway to get the value you are expecting. The same goes for errors.
This is why only checking the HTTP return code is not a practice we recommend in any case. The content matters, not how you get it.
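A minimal sketch of this advice, assuming a hypothetical response shape with "status" and "id" fields (not Busit's documented format): success is decided by the parsed body, not by the status line alone.

```java
import java.util.Map;

// Sketch: decide success from the parsed JSON body, not from the HTTP status
// code alone. The "status"/"id" response shape is an assumption for the demo.
public class ResponseCheckDemo {
    // A 200 response whose body does not carry what we asked for is a failure.
    public static boolean succeeded(int httpCode, Map<String, Object> body) {
        return httpCode == 200
            && "ok".equals(body.get("status"))
            && body.get("id") != null;
    }

    public static void main(String[] args) {
        // HTTP said 200, but the body is a cat picture page, not our entity.
        System.out.println(succeeded(200, Map.of("html", "<img src=cat.jpg>"))); // false
        // HTTP said 200 AND the body carries the expected content.
        System.out.println(succeeded(200, Map.of("status", "ok", "id", 7)));     // true
    }
}
```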
So is the entire world wrong and you are right?
There is no right or wrong. If ever there is an evolution of the HTTP protocol, the Busit REST API will remain untouched, whereas other implementations will most probably need deep adjustments to comply with the new version... which may later change again...
Why use MD5 when it has been proven to be a compromised algorithm?
The MD5 hashing algorithm is used to issue API tokens. This is an educated choice for the purposes it serves. You may have read or heard that MD5 is compromised or weak, which is entirely true. However, this does not remove the intrinsic properties of the algorithm that we are interested in, such as speed, determinism and a fixed-length output.
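As a toy illustration (the class and method names are ours, not Busit's token code), the following shows the properties that make a hash function convenient here: the digest is deterministic and always 128 bits long, whatever the input size.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Toy illustration of the MD5 properties useful for token generation:
// determinism and a fixed-length output. Not Busit's actual token code.
public class Md5TokenDemo {
    // Hash an input string and return the digest as a 32-character hex string.
    public static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            // Left-pad to 32 hex chars: the output length is fixed at 128 bits.
            StringBuilder hex = new StringBuilder(new BigInteger(1, digest).toString(16));
            while (hex.length() < 32) hex.insert(0, '0');
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 is guaranteed by the Java spec
        }
    }

    public static void main(String[] args) {
        // Deterministic: the same input always yields the same digest.
        System.out.println(md5Hex("user42:2024-01-01"));
        // Fixed length: any input size yields 32 hex characters.
        System.out.println(md5Hex("x").length());                          // 32
        System.out.println(md5Hex("a much longer input string").length()); // 32
    }
}
```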
But why is it compromised then?
Some researchers have found a way to produce very specific collision attacks, meaning that they were able to forge two different binary files that yield the same hash output.
How does one find your password based on the hash?
There are two ways to find a password based on its hash, and they are the same for any hashing algorithm: brute force (hashing candidate passwords until one matches) and precomputed lookup tables (rainbow tables).
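The first method can be sketched in a few lines; the second merely performs the same hashing work ahead of time and stores the results. This is purely illustrative, with a made-up wordlist:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of a brute-force / dictionary attack: hash every candidate until one
// matches the stolen hash. A rainbow table just precomputes this loop.
public class DictionaryAttackDemo {
    public static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(new BigInteger(1, digest).toString(16));
            while (hex.length() < 32) hex.insert(0, '0');
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    // Return the candidate whose MD5 matches the target hash, or null.
    public static String crack(String targetHex, String[] candidates) {
        for (String c : candidates)
            if (md5Hex(c).equals(targetHex)) return c;
        return null;
    }

    public static void main(String[] args) {
        String stolen = md5Hex("sunshine"); // pretend this hash leaked
        String[] wordlist = { "password", "123456", "sunshine", "qwerty" };
        System.out.println(crack(stolen, wordlist)); // prints sunshine
    }
}
```

This is also why the speed of MD5, an asset for tokens, is a liability for password storage: the loop above runs fast.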
Are you saying MD5 is safe?
No. Security is a serious topic, and we take it very seriously. However, let's not confuse what we want to achieve with how to achieve it. Some people dismiss MD5 outright because it is flawed. We rather use it for what it is good at.
Why did you reinvent the wheel with your own JSON parser?
The short answer is because we are actually using our platform ourselves and we find it much more convenient that way.
The long answer is a matter of where you need JSON and how fast, robust or convenient you want it to be. We created a custom JSON parser to read data, but writing JSON is done in a strictly conventional manner. That way, we send valid JSON to external sources, but if we receive somewhat malformed input, we try our best to understand it.
The platform uses different programming languages at different levels. The API is written in Java, the frontend uses JavaScript, and BusApps may use PHP. All of those support JSON, and all of those also have a different natural way to write data.
// JAVA
Map<String, Object> map = new HashMap<String, Object>() {{ put("key1", "value"); put("key2", 42); }};
List<Object> list = new ArrayList<Object>() {{ add("value"); add(42); }};
// JAVASCRIPT
var map = { 'key1': "value", 'key2': 42 };
var list = [ "value", 42 ];
// PHP
$map = [ "key1" => 'value', "key2" => 42 ];
$list = [ 'value', 42 ];
Some languages are more strict while others are more loose. For instance, some languages tolerate a single trailing comma while others reject it outright. But in the end, this is all about forcing human developers to conform to yet another syntax. So we turned things around and asked ourselves whether the language should adapt to the developer instead!
In the three examples above, provided that you are a developer who knows about JSON, you were certainly able to understand what the code meant even if you did not know that particular programming language. As a developer again, you have most probably encountered this type of error:
Missing semicolon ';' on line 42 at character 666.
or something like
Found ')' on line 42 at character 666 but expecting '}'.
Then you most probably thought out loud "But do it then if you know it, you #@$*"
Well, this is exactly what we do. If two different humans can understand what was meant in the JSON, so should the parser. Knowing JSON, can you guess what was meant here?
{ 'key1': "value", key2: 42 } { "key1" => "value", "key2" 42
And can your parser tell what this meant ?[ 42: "123,key' value; [ '}', value-:
Well, can you? The goal is not to be challenged on the most complicated scenarios just to make the point. The goal is to simplify developers' lives by accepting common mistakes. Some languages accept single or double quotes; we do. Some languages accept a trailing comma or a missing semicolon; we do. Some languages use colons or arrows; we do. And so on.
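As a toy illustration, and emphatically not Busit's actual parser, a few of those common mistakes can be normalized into strict JSON with naive text substitutions (a real lenient parser works token by token, so it would not break on quoted strings that happen to contain these patterns):

```java
// Toy sketch (NOT Busit's parser): normalize a few common human mistakes
// into strict JSON. Naive regexes would mangle quoted strings containing
// these patterns, which is acceptable for an illustration only.
public class LenientJsonDemo {
    public static String normalize(String input) {
        return input
            .replace("=>", ":")                 // PHP-style arrows
            .replace('\'', '"')                 // single quotes
            .replaceAll(",\\s*([}\\]])", "$1"); // trailing commas
    }

    public static void main(String[] args) {
        System.out.println(normalize("{ 'key1': \"value\", 'key2' => 42, }"));
        // → { "key1": "value", "key2" : 42}
    }
}
```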
And what about performance?
There are numerous aspects of performance that should be taken into account: decoding speed, CPU usage, memory footprint, warmup phases,... and those do not apply equally to huge JSON and to tiny JSON. The main purpose of our JSON parser is to understand the BusApp config file, which amounts to a few kilobytes a few times a day. The fastest library out there takes about 2.5ms to process the file, and our parser is about 100% slower with a very bad 5ms. This means that we are wasting about 6ms of processing every day. In this case, the tradeoff is very clear, and this is a sacrifice on performance we are willing to make.
So you claim your parser is better?
The notion of good or bad is subjective. Our parser fits our needs, is easy to use and holds in a single code file (compared to 750 files for the Jackson library). Did we really need it? As we said first, we are the first users of our platform, we are human, and we make mistakes. So our parser saves us hours of debugging every day.
Why do you use implicit OAuth without refresh tokens?
The entire OAuth process is about trust between parties: the User, Busit, and Third party applications. The goal is for a Third party application to gain access to a limited set of the User's restricted resources on Busit without knowing the User's credentials. Therefore, Busit should trust the Third party requesting a token. Then, the User should trust Busit to deliver only what is necessary while preserving their privacy and keeping their credentials secure. Last, the User should trust the Third party to act on their behalf.
Implicit (1-step) or Authorization code (2-step)?
When using the 2-step approach, the Third party first gets an authorization code (first step) that it must use to obtain an access token (second step). When using the 1-step approach, the Third party directly gets an access token. This is called the implicit flow and is described in RFC 6749, section 1.3.2.
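From the Third party's point of view, the implicit flow simply means the access token arrives directly in the redirect URI fragment (RFC 6749, section 4.2.2). A minimal sketch, with illustrative URLs and token values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Client-side sketch of the implicit (1-step) flow: the access token comes
// back directly in the redirect URI fragment, with no second token-exchange
// request. URLs and token values below are illustrative only.
public class ImplicitFlowDemo {
    // Parse "key=value" pairs out of a redirect URI fragment.
    public static Map<String, String> parseFragment(String redirectUri) {
        Map<String, String> params = new LinkedHashMap<>();
        int hash = redirectUri.indexOf('#');
        if (hash < 0) return params;
        for (String pair : redirectUri.substring(hash + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) params.put(kv[0], kv[1]);
        }
        return params;
    }

    public static void main(String[] args) {
        // One step: the authorization server redirects straight back with a token.
        String redirect = "https://thirdparty.example/cb"
            + "#access_token=SplxlOBeZQQYbYS6WxSbIA&token_type=bearer&expires_in=3600";
        System.out.println(parseFragment(redirect).get("access_token"));
    }
}
```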
What is the difference?
It is a matter of trust. Using the implicit flow, if the User's user-agent (browser) is infected with some virus, then it could intercept the access token. This is a very serious issue! But then again, if the browser is infected, it could just as well intercept the User's credentials (username and password) in the first place. So either you trust the User's environment and you don't need those extra steps, or you don't trust it and you should not allow access at all.
Why no refresh tokens?
The refresh token is typically delivered along with the access token. If the access token expires, then the refresh token can be used to get a new one. However, this mechanism is optional, as described in RFC 6749, section 1.5. The reason for it is once again a matter of trust. If the access token is discovered by some attacker, it will eventually expire, and the official Third party can get a new access token using the refresh token. The attacker can then no longer use the stolen token. This is a very important principle! However, how did the attacker get the access token in the first place, what prevents him from stealing the new token, and why would he steal the access token and not the refresh token itself? What if the attacker steals the refresh token and asks for a new access token, such that the official Third party's token is now invalid? For these reasons we do not issue refresh tokens, and rather allow Users and Third parties to revoke a token if they believe it has been compromised.
Are you saying the Authorization code flow (2-step) and refresh tokens are useless?
No. The OAuth process is very clearly defined in RFC 6749 and allows different schemes. We use the documented method that best fits our needs. We also try to look at the reality of trust between parties, without "what ifs" that sound scary but do not truly address the potential issues behind them.