Our Microscaling Engine provides automation, resilience and efficiency for microservice architectures. You can use our Microscaling-in-a-Box site to experiment with microscaling. Or visit microscaling.com to find out more about our product and Microscaling Systems.
This project is no longer being developed. As an alternative, we recommend taking a look at KEDA.
Microscaling Engine is built and tested with Go 1.6 & 1.7.
Microscaling Engine is under development, so we're not making any promises about forward compatibility, and we wouldn't advise running it on production machines yet. But if you're keen to get it into production we'd love to hear from you.
Microscaling Engine will integrate with all the popular container schedulers. Currently we support:
- Docker API
Support for more schedulers is coming soon. Let us know if there is a particular scheduler you wish us to support.
Currently we support scaling a queue to maintain a target length. Support for more metrics is coming soon.
Two queue scaling algorithms are available:
- SimpleQueue - scales containers up or down by one according to whether the queue is too long or too short.
- Queue - uses control theory to prevent oscillation.
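The SimpleQueue behaviour can be sketched in a few lines of Go. This is an illustrative reimplementation, not the engine's actual code; the function and parameter names are invented:

```go
package main

import "fmt"

// simpleQueueDelta returns the change in container count for one
// scaling tick: +1 if the queue is longer than the target, -1 if it
// is shorter, and 0 if it is on target. This mirrors the SimpleQueue
// idea of only ever moving by one container at a time.
func simpleQueueDelta(queueLen, target int) int {
	switch {
	case queueLen > target:
		return 1
	case queueLen < target:
		return -1
	default:
		return 0
	}
}

func main() {
	fmt.Println(simpleQueueDelta(150, 100)) // queue too long: scale up
	fmt.Println(simpleQueueDelta(40, 100))  // queue too short: scale down
	fmt.Println(simpleQueueDelta(100, 100)) // on target: no change
}
```

The Queue algorithm replaces this bang-bang rule with a control-theory approach so that the container count converges on the target instead of oscillating around it.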
- SQS - blog post with more details coming soon.
- NSQ - see this blog post for more details.
- Azure storage queues - this blog post describes using the Azure queue as the metric while running microscaled tasks on DC/OS.
Support for more message queues is coming soon. Let us know if there is a particular queue you wish us to integrate with.
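All of these integrations boil down to reporting a current queue length. A minimal Go sketch of what such a metric abstraction might look like follows; the interface and names here are illustrative, not the project's actual API:

```go
package main

import "fmt"

// QueueMetric is an illustrative abstraction over a message queue:
// anything that can report its current length can drive scaling.
type QueueMetric interface {
	// CurrentLength returns the number of messages waiting on the queue.
	CurrentLength() (int, error)
}

// stubQueue is a fixed-length stand-in for a real backend such as
// SQS, NSQ or an Azure storage queue.
type stubQueue struct{ length int }

func (s stubQueue) CurrentLength() (int, error) { return s.length, nil }

// needsMoreWorkers reports whether the queue is over its target length.
func needsMoreWorkers(m QueueMetric, target int) (bool, error) {
	n, err := m.CurrentLength()
	if err != nil {
		return false, err
	}
	return n > target, nil
}

func main() {
	busy := stubQueue{length: 250}
	over, _ := needsMoreWorkers(busy, 100)
	fmt.Println(over) // true: the queue is over its target length
}
```

A real integration would implement `CurrentLength` against the queue service's stats API, leaving the scaling decision code unchanged.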
Running with label-based config
Get scaling parameters from your image metadata by configuring them with the following labels:
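As an illustration only, a Dockerfile might declare such labels as shown below. The label keys here are assumptions, so check the project documentation for the exact names and values:

```dockerfile
# Illustrative only: the label keys shown are assumed, not confirmed.
FROM alpine:3.18

# Mark the image as scalable and bound how far the engine may scale it.
LABEL com.microscaling.is-scalable="true" \
      com.microscaling.min-containers="1" \
      com.microscaling.max-containers="10"
```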
Download the compose file and add the following environment variable to the environment settings for the microscaling image:
Building from source
If you want to build and run your own version locally:
- Clone this repo
- Build your own version of the Docker image: `DOCKER_IMAGE=<your-image> make build`
- Start the engine with `docker run -it <your-image>` instead of the standard `docker run` command, so that it picks up your version of the image
Microscaling Engine is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
We'd love to get contributions from you! Please see CONTRIBUTING.md for more details.
Microscaling is a package that monitors demand for resources in a system and then scales and repurposes containers, based on agreed "quality of service" contracts, to best handle that demand within the constraints of your existing VM or physical infrastructure (for v1).
Microscaling is designed to optimize the use of existing physical and VM resources instantly. VMs cannot be scaled in real time (provisioning one takes several minutes), and new physical machines take even longer. However, containers can be started or stopped at sub-second speeds, allowing your infrastructure to adapt itself in real time to meet system demands.
Microscaling is aimed at effectively using the resources you have right now - your existing VMs or physical servers - by using them as optimally as possible.
The microscaling approach is analogous to the way that a router dynamically optimises the use of a physical network. A router is limited by the capacity of the lines physically connected to it. Adding additional capacity is a physical process and takes time. Routers therefore make decisions in real time about which packets will be prioritized on a particular line based on the packet's priority (defined by a "quality of service" contract).
For example, at times of high bandwidth usage a router might prioritize VOIP traffic over web browsing in real time.
Containers allow microscaling to make similar "instant" judgements about service prioritisation within your existing infrastructure. Routers make very simplistic judgements because they have limited time and CPU and act at a per-packet level. Microscaling can make far more sophisticated judgements, although even fairly simple ones still provide a significant new service.
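To make the analogy concrete, here is a toy Go sketch of that kind of judgement: given a fixed pool of container slots, higher-priority tasks are served first, just as a router drains higher-priority queues first. This is a deliberate simplification for illustration, not the engine's actual algorithm:

```go
package main

import (
	"fmt"
	"sort"
)

// task is a toy description of a scalable workload.
type task struct {
	name     string
	priority int // lower number = higher priority, like a QoS class
	want     int // containers the task would like to run
}

// allocate hands out a fixed pool of container slots in priority
// order, so low-priority work is squeezed first when capacity is tight.
func allocate(tasks []task, slots int) map[string]int {
	sorted := append([]task(nil), tasks...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].priority < sorted[j].priority })

	out := make(map[string]int)
	for _, t := range sorted {
		n := t.want
		if n > slots {
			n = slots
		}
		out[t.name] = n
		slots -= n
	}
	return out
}

func main() {
	got := allocate([]task{
		{name: "web", priority: 1, want: 6},
		{name: "batch", priority: 2, want: 6},
	}, 10)
	fmt.Println(got["web"], got["batch"]) // web gets all 6, batch only the remaining 4
}
```

Because the pool is fixed, the only decision is how to divide it; a real engine would also weigh metrics such as queue length when deciding each task's demand.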
- Package api defines the API between the Microscaling agent and server.
- Package demand defines Tasks.
- Package engine defines engines that calculate (or retrieve) the demand for each task.
- Package metric defines things that we measure to determine how well a task is performing, including the queue metric for NSQ (http://nsq.io/).
- Package monitor defines monitors, to which we send updates about tasks and performance.
- Package scheduler defines the interface with schedulers and orchestration systems.
- Package docker integrates with the Docker Remote API (https://docs.docker.com/reference/api/docker_remote_api_v1.20/).
- Package kubernetes provides a scheduler using the Kubernetes API.
- Package marathon provides a scheduler using the Marathon REST API.
- Package toy is a mock scheduler that simply reflects back whatever we tell it.
- Package utils contains common shared code.