This project will no longer be maintained by Intel.
Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
Modern end-to-end AI pipelines have a complex life cycle spanning data processing, feature engineering, model development, and model deployment and maintenance. The iterative nature of feature engineering, model testing, and hyper-parameter optimization makes the process time-consuming. This complexity creates an entry barrier for novice and citizen data scientists who may not have such expertise or skills. Meanwhile, models keep growing larger in pursuit of better accuracy and are often over-parameterized. These over-parameterized models not only pose significant challenges for AI hardware infrastructure, since they require expensive compute for training, but are also hard to deploy in resource-constrained environments, which is a common requirement.
Intel® End-to-End AI Optimization Kit is a composable toolkit for E2E AI optimization that delivers high-performance, lightweight networks/models efficiently on commodity hardware such as CPUs, with the goal of making E2E AI pipelines faster, easier, and more accessible.
Making AI Faster: It reduces E2E time on CPU to an acceptable range through full-pipeline optimization and improved scale-up/scale-out capability on Intel platforms with Intel-optimized frameworks and toolkits, and delivers popular, lighter DL models with comparable accuracy and significantly higher inference throughput.
Making AI Easier: It provides simplified toolkits for data processing, distributed training, and compact neural network construction, automates the E2E AI pipeline with click-to-run workflows, and can be easily plugged into third-party ML solutions/platforms as an independent, composable component.
Making AI More Accessible: Through built-in, optimized, parameterized models generated by the Smart Democratization Advisor and a domain-specific, neural architecture search (NAS) based network constructor, it brings complex DL to commodity hardware, so everyone can access AI on existing CPU clusters without needing to be an expert in data engineering and data science.
This solution is intended for citizen data scientists, enterprise users, independent software vendors, and some cloud service providers.
Intel® End-to-End AI Optimization Kit is a composable toolkit for E2E AI optimization that delivers high-performance, lightweight networks/models efficiently on commodity hardware. It is a pipeline framework that streamlines AI optimization technologies at each stage of the E2E AI pipeline, including data processing, feature engineering, training, hyper-parameter tuning, and inference.
- RecDP: A one-stop toolkit for AI data processing. It provides LLM data processing and machine learning feature engineering libraries in a scalable fashion on top of Ray and Spark. It offers a simple-to-use API for data scientists, delivers optimized performance, and can be easily integrated into third-party solutions.
- Auto Feature Engineering: Provides an automated way to generate new features for any tabular dataset containing numerical, categorical, and text features. It takes only three lines of code to automatically enrich features based on data analysis, statistics, clustering, and multi-feature interaction (see the first sketch after this list).
- LLM Data Preparation: Provides a parallelized, easy-to-use data pipeline for LLM data processing. It supports multiple data sources such as JSON Lines, PDFs, images, and audio/video. Users can perform data extraction, deduplication (near-dedup, ROUGE, exact), splitting, special-character fixing, several types of filtering (length, perplexity, profanity, etc.), and quality analysis (diversity, GPT-3 quality, toxicity, perplexity, etc.). The tool can also save output as JSON Lines or Parquet files, or insert it into vector stores (FaissStore, ChromaStore, ElasticSearchStore); see the second sketch after this list.
- Smart Democratization Advisor (SDA): A user-guided tool that facilitates automation of built-in model democratization via parameterized models. It generates YAML files based on user choices, provides built-in intelligence through parameterized models, and leverages SigOpt for HPO. SDA converts manual model tuning and optimization into assisted AutoML and AutoHPO, and provides a list of built-in optimized models spanning RecSys, CV, NLP, ASR, and RL.
- Neural Network Constructor: A component based on neural architecture search technology and transfer learning to build compact neural network models for specific domains directly. It includes three components:
- DE-NAS: A multi-model, hardware-aware, train-free neural architecture search approach that builds models for CV, NLP, and ASR directly.
- Model Adapter: Leverages transfer learning to adapt models and deploy them in the user's production environment.
- Deltatuner: Extends Parameter-Efficient Fine-Tuning (PEFT) by automatically constructing compact delta structures.
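To make the three-lines-of-code claim for Auto Feature Engineering concrete, here is a minimal Python sketch. The FeatureWrangler class, the engine_type argument, and the dataset path are assumptions based on RecDP's published examples rather than a guaranteed API; consult the RecDP documentation for the authoritative interface.

import pandas as pd
from pyrecdp.autofe import FeatureWrangler  # assumed entry point for auto feature engineering

train_data = pd.read_parquet("train.parquet")  # hypothetical tabular dataset with a "label" column
pipeline = FeatureWrangler(dataset=train_data, label="label")  # analyze the data and plan new features
enriched = pipeline.fit_transform(engine_type="pandas")  # materialize the auto-enriched feature set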
For more information, you may read the docs.
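Similarly, here is a minimal sketch of the LLM data preparation pipeline described above, assuming RecDP's TextPipeline pattern; the operation names below follow RecDP's examples but should be verified against the RecDP LLM documentation.

from pyrecdp.LLM import TextPipeline
from pyrecdp.primitives.operations import (
    JsonlReader, LengthFilter, ProfanityFilter, PerfileParquetWriter)  # assumed operation names

pipeline = TextPipeline()
pipeline.add_operations([
    JsonlReader("data/"),          # read a jsonlines corpus (hypothetical path)
    LengthFilter(),                # drop documents outside a sensible length range
    ProfanityFilter(),             # profanity-based quality filtering
    PerfileParquetWriter("out/"),  # write the cleaned corpus as Parquet (hypothetical path)
])
pipeline.execute()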
To install all components:
- To install e2eAIOK in a bare-metal environment, use
pip install e2eAIOK
- To install the latest nightly build, use
pip install e2eAIOK --pre
To install each individual component:
- To install SDA, use
pip install e2eAIOK-sda
- To install DE-NAS, use
pip install e2eAIOK-denas
- To install Model Adapter, use
pip install e2eAIOK-ModelAdapter
- To install Deltatuner, use
pip install e2eAIOK-deltatuner
- To install e2eAIOK in a Docker environment, use
git clone https://github.com/intel/e2eAIOK.git
cd e2eAIOK
git submodule update --init --recursive
# pick one backend; list your worker hosts and set a proxy if required
python scripts/start_e2eaiok_docker.py --backend [tensorflow, pytorch, pytorch112] --dataset_path ../ --workers host1, host2, host3, host4 --proxy "http://addr:port"
Intel® End-to-End AI Optimization Kit provides step-by-step demos. Once installation is complete, please refer to the Demo section to run the click-to-run notebooks on Colab or to get familiar with the APIs of each individual component for a specific workload.
Built-in Models
Neural network constructor
DE-NAS demos:
- DE-NAS Overview
- CNN - Computer Vision, PyTorch
- ViT - Computer Vision, PyTorch
- BERT - NLP, PyTorch
- ASR - Speech Recognition, PyTorch
- BERT Huggingface - Hugging Face models, PyTorch
Model Adapter demos
- Model Adapter Overview
- Finetuner - Computer Vision, Image Classification, ResNet50, PyTorch
- Distiller - Computer Vision, Image Classification, ResNet18, PyTorch
- Domain Adapter - Computer Vision, Medical Segmentation, 3D Unet, PyTorch
- E2E RecSys Performance - DLRM, DIEN, WnD
- SDA Model Performance - ResNet, BERT, RNN-T, MiniGo
- DE-NAS Performance - CNN, ViT, BERT, ASR
- The Parallel Universe Magazine - Accelerate AI Pipelines with New End-to-End AI Kit
- Multi-Model, Hardware-Aware Train-Free Neural Architecture Search
- SigOpt Blog - Enhance Multi-Model Hardware-Aware Train-Free NAS with SigOpt
- The Intel® SIHG4SR Solution for the ACM RecSys Challenge 2022
- ACM - SIHG4SR: Side Information Heterogeneous Graph for Session Recommender
- ICYMI – SigOpt Summit Recap Democratizing End-to-End Recommendation Systems
- The SigOpt Intelligent Experimentation Platform
- SDC2022 - Data Platform for End-to-end AI Democratization
- “Model Adapter”: Enhance Your AI Pipeline with Efficient Knowledge Transfer