A very, very fast logistics model, and its many applications
Our logistics model is 33x faster than open-source methods for solving real-world problems in transport, infrastructure, and supply chain optimisation.
This interactive demo poses an example problem: given a logistics network and two precursor goods X and Y, how should we distribute their production across the network to maximise the connectivity between the two kinds of goods?
Why is this hard?
Such problems are instances of what is known as combinatorial optimisation. The general shape of a combinatorial optimisation problem is that:
- one has very many possible options
- one can always compare two options quickly to tell which is better
- one seeks the "best" option
Because there are so many options, in practice there is often no better general-purpose method than so-called "brute-force search": go through all of the options one by one, and keep track of the best one found so far. The catch is that the number of options grows explosively with the size of the problem. To get a feel for this, try the following game: select nodes to activate as energy sources so that the total amount of energy flowing from active to inactive nodes is maximised.
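The game above can be sketched in code. The following is a minimal brute-force version on a small, hypothetical 5-node network (the edge list and weights are invented for illustration): enumerate every possible activation pattern and keep the best one found so far.

```python
from itertools import product

# Toy version of the activation game: choose a set of "active" nodes so
# that the total weight of edges running between active and inactive
# nodes is maximised. Hypothetical network; edges are (u, v, weight).
edges = [(0, 1, 3), (0, 2, 1), (1, 2, 2), (1, 3, 4), (2, 4, 2), (3, 4, 3)]
n = 5

def cut_value(active):
    """Total weight of edges with exactly one endpoint active."""
    return sum(w for u, v, w in edges if (u in active) != (v in active))

# Brute force: try all 2^n activation patterns, tracking the best so far.
best_value, best_set = -1, None
for bits in product([0, 1], repeat=n):
    active = {i for i, b in enumerate(bits) if b}
    value = cut_value(active)
    if value > best_value:
        best_value, best_set = value, active

print(best_value, sorted(best_set))  # best cut found by exhaustive search
```

The loop body is trivial; the problem is the `2^n` iterations. Doubling the network size squares the number of patterns to check, which is why exhaustive search stops being viable almost immediately.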
For even modestly sized toy problems, brute-force search would take longer than a human lifespan on modern computers. So, in practice, we settle for "good enough" solutions found within a reasonable timeframe, and we adopt heuristics: problem-specific rules of thumb that guide the search, forgoing exhaustive evaluation of all options in favour of options that "seem more promising" for the problem at hand.
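One classic heuristic of this kind is greedy local search: start from a random activation pattern and flip one node at a time, keeping any flip that improves the score, until no single flip helps. The sketch below applies it to the same style of activation game on a hypothetical 6-node network (edge weights invented for illustration); it inspects only a handful of candidates rather than all `2^n` patterns, at the cost of possibly stopping at a local optimum.

```python
import random

# Hypothetical network; edges are (u, v, weight).
edges = [(0, 1, 2), (0, 2, 3), (1, 3, 1), (2, 3, 4),
         (3, 4, 2), (4, 5, 5), (1, 5, 3)]
n = 6

def cut_value(active):
    """Total weight of edges with exactly one endpoint active."""
    return sum(w for u, v, w in edges if (u in active) != (v in active))

def local_search(seed=0):
    """Greedy single-flip local search from a random starting pattern."""
    rng = random.Random(seed)
    active = {i for i in range(n) if rng.random() < 0.5}
    improved = True
    while improved:
        improved = False
        for i in range(n):
            candidate = active ^ {i}  # flip node i's activation
            if cut_value(candidate) > cut_value(active):
                active, improved = candidate, True
    return active, cut_value(active)

active, value = local_search()
print(sorted(active), value)
```

The result is guaranteed only to be a local optimum: no single flip improves it. Restarting from several random seeds and keeping the best result is the usual cheap remedy.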
Our Approach
At Zefram, we have combined insights from Deep Learning and Quantum Computing to develop a new class of solver models that systematically achieve better solutions faster. Our speedup gains yield a qualitative performance improvement: problems that were previously out of reach of classical methods are now amenable to optimisation, promising new opportunities in domains such as:
- Last-mile delivery and vehicle routing: a 60-stop route has more possible orderings than there are atoms in the observable universe, yet drivers need answers in seconds.
- Network infrastructure design: telecom and energy companies must decide which nodes to build, upgrade, or decommission while maintaining coverage and redundancy guarantees, and every inefficiency accumulates cost.
- Warehouse and fulfilment optimisation: inventory placement, pick-path routing, and labour scheduling interact in ways that defeat decomposition into independent subproblems.
- Supply chain resilience: multi-tier sourcing decisions and disruption scenarios require re-solving massive allocation problems faster than the world changes.
- Portfolio construction under real constraints: mandates such as "hold exactly 50 stocks" or "no position under 2%" make the problem combinatorial, rebalancing windows are tight, and stale solutions are costly.
- Power grid unit commitment: deciding which generators to activate each hour to meet demand at minimum cost while respecting ramp rates and reserve requirements. Grid operators solve this daily; each run takes hours and leaves money on the table.
- Operating room scheduling: which surgeries go in which rooms with which teams, satisfying equipment, skill, and emergency-slack constraints; brittle schedules mean underutilised theatres and longer waiting lists.
- Sensor placement and coverage: positioning k sensors to maximise detection probability over terrain under budget constraints. The same mathematics applies to cell tower siting, surveillance networks, and disaster response.
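The combinatorial growth behind the routing example above is easy to check directly: the number of ways to order n stops is n!, and somewhere between 50 and 60 stops it overtakes the roughly 10⁸⁰ atoms estimated in the observable universe.

```python
import math

# Compare n! against a ~10^80 estimate for atoms in the observable universe.
for n in (10, 20, 50, 60):
    print(n, math.factorial(n) > 10**80)
# 50! is about 3e64 (still "small"); 60! is about 8e81, which exceeds 10^80.
```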
The future
The benefits of the scaling era are currently bottlenecked by narrowness of application: large foundation models are increasingly powerful, yet real-world problems in allocation, scheduling, and routing remain underserved, and for no good reason. We are working to change that.