Apple releases open source MLX framework for efficient machine learning on Apple Silicon

Apple recently released MLX, or ML Explore, the company's machine learning (ML) framework for Apple Silicon computers. The framework is designed to simplify the process of training and running ML models on computers powered by Apple's M1, M2, and M3 series chips. The company says that MLX has a unified memory model. Apple has also demonstrated uses of the framework, which is open source, allowing machine learning enthusiasts to run it on their own laptops and desktops.

According to details shared by Apple on the code hosting platform GitHub, the MLX framework has a C++ API as well as a Python API that is closely modelled on NumPy, the Python library for scientific computing. According to Apple, users can also take advantage of higher-level packages that enable them to create and run more complex models on their computers.
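As a rough illustration of how the NumPy-style Python API looks, here is a minimal sketch, assuming the mlx.core package name and the lazy evaluation behaviour described in Apple's documentation:

```python
import mlx.core as mx

# Arrays are created much like NumPy arrays
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones((3,))

# Operations build a lazy compute graph; mx.eval() forces the computation
c = mx.exp(a) + b
mx.eval(c)
print(c)
```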

MLX simplifies the process of training and running ML models on Apple Silicon computers. Developers were previously forced to rely on a translator, such as Core ML, to convert and optimise their models; that step has now been replaced by MLX, which allows users running Apple Silicon computers to train and run their models directly on their devices.
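For a sense of what on-device training looks like with MLX's higher-level packages, here is a minimal sketch, assuming the mlx.nn and mlx.optimizers modules and a toy regression task (the model, data, and hyperparameters are illustrative only):

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

# A small multi-layer perceptron built from MLX's nn.Module
class MLP(nn.Module):
    def __init__(self, in_dims, hidden, out_dims):
        super().__init__()
        self.l1 = nn.Linear(in_dims, hidden)
        self.l2 = nn.Linear(hidden, out_dims)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

def loss_fn(model, x, y):
    return nn.losses.mse_loss(model(x), y)

model = MLP(4, 32, 1)
optimizer = optim.SGD(learning_rate=0.01)
loss_and_grad = nn.value_and_grad(model, loss_fn)

# Toy data standing in for a real dataset
x = mx.random.normal((64, 4))
y = mx.random.normal((64, 1))

for step in range(100):
    loss, grads = loss_and_grad(model, x, y)
    optimizer.update(model, grads)
    # Computation is lazy, so force the parameter update to run
    mx.eval(model.parameters(), optimizer.state)
```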


Apple shared this image of a large red icon with the text MLX, generated by Stable Diffusion in MLX
Photo Credit: GitHub/Apple

Apple says the design of MLX follows other popular frameworks in use today, including ArrayFire, Jax, NumPy, and PyTorch. The firm has promoted the unified memory model of its framework: MLX arrays reside in shared memory, so operations on them can be performed on any supported device type (currently, Apple supports CPUs and GPUs) without making copies of the data.
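The following minimal sketch illustrates that idea, assuming the per-operation stream argument and the mx.cpu and mx.gpu devices described in MLX's documentation:

```python
import mlx.core as mx

a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# Both arrays live in unified memory; no copy is needed to use them on
# either device. The device is chosen per operation via a stream.
c = mx.matmul(a, b, stream=mx.gpu)  # run the matrix multiply on the GPU
d = mx.add(a, b, stream=mx.cpu)     # run the addition on the CPU
mx.eval(c, d)
```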

The company has also shared examples of MLX executing tasks like image generation using Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16, with up to 40 percent higher throughput than the latter.

The tests were conducted on a Mac powered by the M2 Ultra chip, the company's fastest processor to date. According to the company, MLX is capable of generating 16 images in 90 seconds, while PyTorch would take about 120 seconds to perform the same task.

Other examples of MLX in use include text generation with Meta's open source LLaMA language model as well as the Mistral large language model. AI and ML researchers can also use OpenAI's open source Whisper tool to run speech recognition models on their computers with MLX.

The release of Apple’s MLX framework could help ease ML research and development on the company’s hardware, ultimately helping developers build better tools for apps and services that offer on-device ML features which run efficiently on Apple’s computers.

