API Optimization Engine is Now Open Source


The API Optimization Engine (AOE) is now an open source project. Instead of spending months developing your own infrastructure to speed up your development cycle and your website, we’ve taken care of all of the work for you. Simply install the engine in your cloud, and route all of your API calls through that call processing server.



The API Optimization Engine provides numerous optimization advantages, all of which are described in Rick Mac Gillis’ book, “The New Frontier in Web API Programming.” The software has successfully completed its alpha and private beta trials and is now in public beta, which means the engine is ready for companies like yours to test drive and get familiar with. It’s not yet considered stable, so don’t use it on your production servers. If you find any bugs in the code, please report them on the project’s GitHub Issue Tracker.

What optimizations does the engine provide?

Call Processing Server – Route your calls through a single server or a cloud of API call processing servers to lighten the load on your web server. Remember from the book that every TCP request requires at least 9 transactions! The API Optimization Engine is the missing code you need to construct a call processing server.

Batch Processing – Batch process your API calls by sending multiple requests to the call processing server to avoid wasting resources with one-off requests.
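As an illustration, a batched request to the call processing server might bundle several calls into a single payload along these lines (the field names and structure here are hypothetical, not the engine’s actual schema):

```json
{
  "calls": [
    {"name": "weather-lookup", "params": {"city": "Boston"}},
    {"name": "stock-quote", "params": {"symbol": "ACME"}}
  ]
}
```

One round trip to the call server then fans out into both remote calls, instead of two separate trips from your web server.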

Parallel Call Processing – The engine uses stream_socket_client() and stream_select() to process your calls in non-blocking (parallel) mode, so you only wait as long as it takes for the longest call to complete. Remember that in serial mode call times stack: three 5-second calls become a 15-second wait.
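The engine does this in PHP with a non-blocking stream_select() loop; as a rough illustration of why parallel dispatch wins, here is a minimal Python sketch (using threads rather than the engine’s socket loop) comparing serial and parallel timing for three simulated calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_api_call(seconds):
    """Stand-in for a remote API call that takes `seconds` to respond."""
    time.sleep(seconds)
    return seconds

durations = [0.2, 0.2, 0.2]  # three simulated remote calls

# Serial: each call blocks the next, so the times stack.
start = time.perf_counter()
for d in durations:
    fake_api_call(d)
serial_elapsed = time.perf_counter() - start

# Parallel: total wait is roughly the longest single call.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(durations)) as pool:
    list(pool.map(fake_api_call, durations))
parallel_elapsed = time.perf_counter() - start

print(f"serial:   {serial_elapsed:.2f}s")
print(f"parallel: {parallel_elapsed:.2f}s")
```

With real 5-second calls the gap is exactly the one described above: roughly 15 seconds serially versus roughly 5 seconds in parallel.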

RAML Modeling Language – Stop bloating your code with messy SDKs that make it harder to read. Simply describe the remote API you’re contacting by writing an easy-to-learn RAML specification.
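For example, a minimal RAML document describing a remote endpoint might look like this (the API, URL, and fields are invented for illustration):

```yaml
#%RAML 1.0
title: Weather API
baseUri: https://api.example.com/v1
/forecast:
  get:
    queryParameters:
      city:
        type: string
        required: true
    responses:
      200:
        body:
          application/json:
```

A few lines like these replace an entire vendor SDK: the engine reads the specification and knows how to build, send, and validate the call.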

Don’t Wait for a Response – You don’t need the response from every API you call, so why wait for one? Simply instruct the engine that you do not wish to receive the response, and it will queue the call and reply with a generic message of its own.

Preconfigure Your Requests – Through the use of the aptly named “static calls,” you may preconfigure calls whose request data remains the same. That way, you can maintain the request directly in your RAML document in the database while passing only basic data to the call processing server.

Use One Format – Now you can speak to an XML server in JSON, or a JSON server in XML, while keeping your projects free of conversion classes and your code much cleaner.
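To make the idea concrete, here is a small Python sketch of the kind of translation such a layer performs, converting a flat JSON request body into XML for a server that only speaks XML (an illustration of the concept, not the engine’s own code):

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(json_text, root_tag="request"):
    """Convert a flat JSON object into a simple XML document.

    Illustrative stand-in for an engine-side translation layer;
    nested objects and attributes are omitted for brevity.
    """
    data = json.loads(json_text)
    root = ET.Element(root_tag)
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

payload = json.dumps({"city": "Boston", "units": "metric"})
print(json_to_xml(payload))
# <request><city>Boston</city><units>metric</units></request>
```

Your application keeps speaking JSON end to end; the conversion happens once, on the call server, instead of in scattered conversion classes throughout your codebase.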

Caching – The engine supports caching options through the framework, so you can cache the way you want to. Use Redis, Memcached, the file system, or the database. The engine will cache repetitive data, such as nonces, usage data, and static call responses. (You may shut off static call response caching in the configuration options.)

I hope you enjoy our new open source project. Bit API Hub is evolving into something much more than an API optimization company. We’re transforming into a company specializing in artificial intelligence, and we’ll use the API Optimization Engine as the backbone for our own API connectivity… More on that in the upcoming months.

Photo Credit: Pixabay

Rick Mac Gillis

Rick Mac Gillis is the CEO and founder of Bit API Hub.
