
Mojo Creates A 4,000x Speed Boost to Programming AI

Machine learning and deep neural network development are top priorities for developers today. Replicating human thinking and decision-making through AI defines the moment we're living in. But the best language we have for designing and training neural networks is not optimized for the speed AI development demands.

Nearly all AI models today are developed in Python, thanks to the flexible and elegant programming language, fantastic tools and ecosystem, and high performance compiled libraries. But it comes with a downside: performance. Python is many thousands of times slower than languages like C++. However, Python has a trick up its sleeve: it can call out to code written in fast languages. So Python programmers learn to avoid using Python for the implementation of performance-critical sections. – fast.ai
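The pattern fast.ai describes can be seen even in plain CPython: a hand-written Python loop runs every iteration through the interpreter, while built-ins like `sum()` run the same loop in compiled C. A minimal sketch (the function names and the problem size are illustrative, and exact timings will vary by machine):

```python
import timeit

def slow_sum(n):
    # Pure-Python loop: every iteration is interpreted bytecode.
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # sum() is implemented in C inside CPython, so the same loop
    # runs in compiled code instead of the interpreter.
    return sum(range(n))

n = 1_000_000
# Both compute the same result: 0 + 1 + ... + (n - 1).
assert slow_sum(n) == fast_sum(n) == n * (n - 1) // 2

interpreted = timeit.timeit(lambda: slow_sum(n), number=5)
c_backed = timeit.timeit(lambda: fast_sum(n), number=5)
print(f"interpreted: {interpreted:.3f}s  C-backed: {c_backed:.3f}s")
```

This is exactly the workaround Mojo aims to make unnecessary: today, getting speed means routing hot paths into code that was written in another language.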

This is a challenge because it forces programmers to learn other languages in order to efficiently test and improve their AI models. Modular, however, has introduced a new programming language called Mojo that could remedy this problem.

WTF? 4,000x Boost to Python

Mojo is a new programming language based on Python that fixes Python's performance and deployment problems. Because it's built on Python, it lets programmers reap the benefits of faster languages without having to learn an entirely new one.

The result: Mojo can execute certain workloads more than 4,000x faster than standard Python.

How? Mojo is designed to take full advantage of MLIR, which is a compiler infrastructure specifically tailored to the needs of machine learning and AI systems. It’s critical for fully leveraging the power of hardware like GPUs, TPUs, and CPUs.

A compiler is a crucial part of developing software. The purpose of a compiler is to translate code that humans can read and write into code that machines can understand and execute. Considering humans and machines don’t speak the same language, a compiler plays the role of translator.

But just like translating English into Spanish, there's a lot that goes into translating something. You must account for the words, the grammar, and missing information. The compiler does all of this and then optimizes the result for efficiency, so the program needs less computing power to run. Therefore, the input you give the compiler matters a lot.
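You can watch a tiny version of this optimization happen inside Python itself. CPython's bytecode compiler performs constant folding: an arithmetic expression made of constants is computed once at compile time instead of every time the code runs. A small demonstration (the variable name `seconds_per_day` is just for illustration):

```python
# Compile a one-line program without running it.
code = compile("seconds_per_day = 60 * 60 * 24", "<demo>", "exec")

# The compiler folded 60 * 60 * 24 into 86400 and stored the
# result directly in the code object's constants table.
print(86400 in code.co_consts)  # → True
```

Compilers like MLIR apply the same idea at a vastly larger scale, which is why the form of the input they receive matters so much.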

In simple terms, Mojo is optimized to communicate with one of the most capable compilers (translators) we have for designing AI in computers. But what's so special about Mojo is that it's built on a familiar programming language (Python), which is used by (a conservative estimate of) 15 million programmers. That's millions of people who can now confidently and rapidly design machine learning programs without diving into an entirely foreign language.

If that weren’t a good enough reason to take notice, then there’s also the innovator leading this project: Chris Lattner.

Chris has a phenomenal track record. He has pioneered compiler infrastructure many times over, creating LLVM and later, at Google, the MLIR project that Mojo builds on. He served as VP of Autopilot Software at Tesla. And he also created the Swift programming language, the go-to way to create apps for iPhone, iPad, macOS, and Apple TV.

Chris has deep experience in designing systems that programming rookies and veterans can use and get the most out of. He has massive wins under his belt and seems to have another one brewing in Mojo.

I imagine that Mojo programming classes, workshops, and experts will quickly be in high demand as things continue to trend upward for AI investments.

AI is the future (and present) of everything. Thus, creating better processes for developing AI systems allows for more participants and more efficient progress. It’s like we are driving in first gear and then Mojo shifts us into third gear. It could be the greatest achievement in programming in decades.

To further understand how paramount Mojo is to AI, read a more in-depth analysis of Mojo or watch Mojo in action below: