Boosting performance is a priority in the AI space. Businesses will not hesitate to make large investments to achieve even a marginal improvement (e.g. increased accuracy or speed) in their AI systems. A range of factors affects the performance of AI: high-quality data, infrastructure, code, and the right talent are all crucial elements of a thriving AI ecosystem.
In order to achieve optimal performance, an AI system must in essence maximise results while minimising inefficiencies and costs. In this article, we discuss key approaches to AI optimisation, including an overview of innovations developed by TurinTech.
Factors underlying optimal AI performance
AI performance relies on four key pillars: (1) data, (2) infrastructure, (3) code, and (4) talent. In order to achieve optimal AI performance, each of these components needs to be optimised.
Data: Good datasets are essential in training, testing, and deploying AI systems
AI systems and machine learning models are built, trained, tested, and deployed based on data. Incomplete, invalid, or corrupt data compromises the quality of AI systems. Training and test data used in an AI system must also be a representative sample of the population. Non-representative datasets pose the risk of introducing biases to AI systems.
Selecting a good dataset and preprocessing the data are essential first steps in ensuring strong AI performance. Read our blog Data Quality in Machine Learning: How to Evaluate and Improve? for more information on data quality, particularly when building machine learning models.
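As a minimal illustration of such first steps, the sketch below uses pandas to surface some common data-quality issues: missing values, duplicates, out-of-range entries, and class imbalance. The file name and column names are hypothetical placeholders, not part of any particular dataset.

```python
import pandas as pd

# Hypothetical dataset; the path and column names are placeholders.
df = pd.read_csv("customers.csv")

# Completeness: count missing values per column.
print(df.isna().sum())

# Validity: remove exact duplicates and rows with impossible values.
df = df.drop_duplicates()
df = df[df["age"].between(0, 120)]

# Representativeness: inspect the class balance of the target column
# and compare it against what is expected in the wider population.
print(df["churned"].value_counts(normalize=True))
```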
Infrastructure: Robust technology enables powerful performance
Infrastructure and hardware make it possible to host the data, models, and software needed for AI solutions, and they underpin processing, memory, networking, and storage. Processing and logic devices such as CPUs, GPUs, FPGAs and/or ASICs, temporary and long-term storage solutions, and devices that enable connectivity make up the essential hardware bundle for AI.1 Improving the performance of AI systems is therefore inevitably tied to hardware capacity.
However, it is doubtful that hardware capacity can keep growing exponentially without innovative interventions. Following Moore’s law2, there are concerns that we may be approaching the physical limits of computing power3,4. In order to power the AI of the future, bleeding-edge infrastructure solutions will be essential.
Figure 1. ML Infrastructure (Source: A16z)
Code: Clear, manageable, and efficient code streamlines AI implementation
Code that underlies AI systems is crucial to ensure that solutions function as expected, and do so without error. Good code is reliable, clear, and consistent. Availability of documentation makes code bases easier to maintain. The ever-evolving AI ecosystem requires high-performance software, making code efficiency critical. Even the slightest increase in code efficiency can accelerate application speed and boost productivity and profits.
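As a simple, self-contained illustration of how small rewrites affect efficiency (not taken from the study cited below), the snippet compares a plain Python loop with a vectorised NumPy equivalent of the same computation; absolute timings will vary by machine.

```python
import timeit
import numpy as np

values = np.random.rand(1_000_000)

def slow_sum_of_squares(xs):
    # Element-by-element Python loop: correct, but interpreter-bound.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def fast_sum_of_squares(xs):
    # The same computation expressed as a single vectorised operation.
    return float(np.dot(xs, xs))

print("loop:      ", timeit.timeit(lambda: slow_sum_of_squares(values), number=3))
print("vectorised:", timeit.timeit(lambda: fast_sum_of_squares(values), number=3))
```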
A study on the business impact of code quality reported that Time-in-Development for Alert level 1 codebases is 124% higher than for Healthy level code5. In a survey of C-level executives conducted by Stripe, it was reported that bad code costs companies $85 billion annually6. Technical debt in machine learning7 piles up so quickly that poor-quality code can set even the most experienced teams back by half a year.
Talent: High-quality talent develops the best-performing AI systems
AI is envisioned, developed, and maintained by data scientists, software engineers, and various other personnel in the AI ecosystem. Engaging the best talent leads to AI solutions that are innovative, efficient, and profitable. LinkedIn’s 2020 Emerging Jobs Report UK identifies Artificial Intelligence Specialist as the UK’s top emerging job, highlighting the value of and demand for AI talent.8
However, due to the time and training required for specialisation, it can often be difficult to find and recruit AI specialists. Organisations may feel the need to curtail the scope of their AI applications if they are unable to find the right talent for the task.
Despite being essential in the AI ecosystem, each of these elements presents its own set of challenges and limitations. Innovative solutions are required to ensure that AI can be scaled without being hindered by complications. Code optimisation is one area with a promising outlook for achieving optimal AI performance.
The benefits of code optimisation are manifold. Optimisation makes code bases cleaner, clearer, and more consistent, so the software runs more efficiently. End-users receive an enhanced experience, with better results and increased speed. Optimised code is also more readable, making it easier for multiple stakeholders to collaborate on a single code base. Unlike hardware acceleration, accelerating software (code) can also deliver greater returns to scale. An analysis by Intel estimates that “even a 10X gain in performance through software AI accelerators can lead to approximate cost savings of millions of dollars a month”.9
Figure 2. Code optimisation boosts performance without extra hardware costs
Value of code optimisation in achieving optimal AI performance
In essence, code optimisation is writing or rewriting code so that it executes using minimal time, energy, and computing resources. However, code optimisation often receives mixed responses in software development. On the one hand, optimisation can dramatically boost programme performance. On the other hand, optimisation efforts, particularly manual ones, can consume an unsustainable amount of developer time and resources, and sub-par optimisation can even degrade programme performance. Given the value of code optimisation, particularly in industrial production environments, there is an increasing need for tools and technologies that can optimise code while keeping these disadvantages to a minimum.
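To make the idea concrete, here is a toy, hedged example of the kind of rewrite code optimisation targets; it is illustrative only and not output from any particular tool. The first function materialises a large intermediate list, while the second streams the same computation through a generator and keeps memory usage roughly constant.

```python
def total_discounted_eager(prices, discount=0.1):
    # Builds a full intermediate list in memory before summing.
    discounted = [p * (1 - discount) for p in prices]
    return sum(discounted)

def total_discounted_lazy(prices, discount=0.1):
    # Same result, but the generator avoids the intermediate list,
    # so memory use stays flat however many prices are processed.
    return sum(p * (1 - discount) for p in prices)

prices = range(1_000_000)
assert total_discounted_eager(prices) == total_discounted_lazy(prices)
```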
Understanding its value, TurinTech has been researching code optimisation and its applications in the AI ecosystem extensively for over 10 years. One of our key research applications is evoML, a platform that accelerates code optimisation. evoML works on users’ code bases to automatically detect and reduce inefficiencies. The technology is based on genetic algorithms, in which code goes through a process of evolution until an optimal solution is reached. evoML also supports multi-objective optimisation, allowing businesses to optimise the specific parameters they wish to address.
Figure 3. Code optimisation by evoML platform
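For readers unfamiliar with the underlying idea, the sketch below shows a generic, heavily simplified evolutionary loop with a weighted multi-objective fitness function. It is not evoML’s implementation: the candidates are plain numeric parameter lists, and the two objectives are synthetic stand-ins for metrics such as execution time and memory use.

```python
import random

def objectives(candidate):
    time_like = sum(x * x for x in candidate)            # stand-in for runtime
    memory_like = sum(abs(x - 1.0) for x in candidate)   # stand-in for memory
    return time_like, memory_like

def fitness(candidate, weights=(0.5, 0.5)):
    # A weighted sum lets the user emphasise the objective they care about most.
    t, m = objectives(candidate)
    return weights[0] * t + weights[1] * m

def evolve(pop_size=30, genes=5, generations=100):
    population = [[random.uniform(-2, 2) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)                      # selection
        survivors = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)            # crossover
            child = [(x + y) / 2 + random.gauss(0, 0.1)   # plus mutation
                     for x, y in zip(a, b)]
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

print(evolve())
```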
Our research showed that there are significant benefits in using automated code optimisation to improve the quality of code. In one of our studies, we saw optimisation resulting in up to 46% improvement in execution time, 44.9% improvement in memory consumption and 49.7% improvement in CPU usage.10 Our proprietary code optimisation is able to improve the performance of code bases significantly, at a fraction of the cost. Reducing execution time and the computational cost of running programmes also means making considerable savings on finances, resources, and energy usage.
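For context, the short sketch below shows one straightforward way to measure wall-clock time, CPU time, and peak memory for a single Python function using only the standard library; the workload function is a placeholder, and this is not how the study cited above was conducted.

```python
import time
import tracemalloc

def workload():
    # Placeholder for the routine being measured before and after optimisation.
    return sum(i * i for i in range(1_000_000))

def profile(fn):
    tracemalloc.start()
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn()
    cpu_seconds = time.process_time() - cpu_start
    wall_seconds = time.perf_counter() - wall_start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return wall_seconds, cpu_seconds, peak_bytes

print(profile(workload))
```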
However, as we have mentioned earlier, code is only a part of the AI ecosystem. evoML is an end-to-end solution that brings the entire data science pipeline onto a single platform together with code optimisation. With evoML, users are able to build and optimise machine learning models with production-quality code at a fraction of the time, effort, and cost of the conventional model-building process.
About the Author
Malithi Alahapperuma | TurinTech Technical Writer
Researcher, writer and teacher. Curious about the things that happen at the intersection of technology and the humanities. Enjoys reading, cooking, and exploring new cities.
1 Artificial-intelligence hardware: New opportunities for semiconductor companies
2 The Future of Computing Performance: Game Over or Next Level?
3 We’re approaching the limits of computer power – we need new programmers now
4 We’re not prepared for the end of Moore’s Law
7 Technical Debt in Machine Learning
8 2020 Emerging Jobs Report | UK