Gen AI Program: Mastering Large Language Models

Master the Transformer Architecture. Advanced Prompt Engineering Techniques. Build RAG & Agents with LangChain. Fine-Tune Embeddings & LLMs with PEFT using QLoRA. Deploy LLMs to Production in the Cloud.


6 WEEKS. COHORT-BASED COURSE

€499 (all taxes included)

Next Cohort: March 31 - May 12.

About the training course

👋 Embark on your next exciting journey in Generative AI with hands-on enterprise projects to gain real-world experience!

🚀 Join our AI community and start building, fine-tuning, deploying, and sharing your innovations with us!

đŸ€© Each cohort concludes with a thrilling live Demo Day experience, showcasing your projects.

đŸ§‘â€đŸ’» You will master the art of prototyping and deploying LLM applications, focusing on key principles: Prompt Engineering, Retrieval Augmented Generation (RAG), Fine-Tuning (of LLMs and embedding models), and building sophisticated Agents.

đŸ§‘â€đŸ€â€đŸ§‘ Get hands-on experience in building and refining complex LLM applications. Weekly sessions provide constant feedback through live interactions, peer support, and instructor guidance.

📇 Dive deep into RAG systems. Learn to create robust applications, from quick prototypes with OpenAI GPT models to scalable, production-ready solutions using the latest open-source LLMs and embedding models.
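To make the RAG idea concrete, here is a minimal sketch of the retrieval step. It uses toy bag-of-words vectors in place of a real embedding model; the corpus, query, and `embed` helper are illustrative stand-ins, not part of the course materials:

```python
import numpy as np

# Toy corpus standing in for private reference data.
docs = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Shipping is free for orders over 50 euros.",
]

def embed(text, vocab):
    # Hypothetical stand-in for a real embedding model:
    # a bag-of-words count vector over a shared vocabulary.
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

vocab = sorted({w for d in docs for w in d.lower().split()})
doc_vecs = np.stack([embed(d, vocab) for d in docs])

def retrieve(query, k=1):
    # Rank documents by cosine similarity to the query vector.
    q = embed(query, vocab)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9))
    top = np.argsort(-sims)[:k]
    return [docs[i] for i in top]

context = retrieve("how long is the warranty")
# The retrieved passage is then inserted into the LLM prompt to ground the answer:
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how long is the warranty"
print(context[0])
```

In production the count vectors would be replaced by a trained embedding model and a vector database, but the retrieve-then-prompt flow stays the same.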

🔧 Fine-Tune LLMs and Embeddings: Master advanced techniques for fine-tuning pre-trained models and embedding models to optimize performance for specific tasks and applications.
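As a taste of LoRA, one of the parameter-efficient fine-tuning techniques covered in the syllabus, the sketch below shows the core idea with toy NumPy matrices: the pretrained weight W stays frozen while only a small low-rank update B·A is trained. All sizes and names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

d, r = 16, 2           # hidden size and LoRA rank (r << d)
alpha = 4              # LoRA scaling hyperparameter

W = rng.normal(size=(d, d))         # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized, so the update starts at 0

def lora_forward(x):
    # Output = x W^T + (alpha / r) * x A^T B^T
    # Only A and B (2*d*r parameters) are trained, not the d*d matrix W.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, d))
base = x @ W.T
adapted = lora_forward(x)
# With B = 0 the adapter is a no-op, so the model matches the pretrained
# weights exactly at initialization.
print(np.allclose(base, adapted))  # True
```

Here the frozen matrix has 256 parameters while the trainable adapter has only 64; at realistic model sizes that gap is what makes fine-tuning feasible on modest hardware.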

☁ Gain proficiency in cloud deployment and improve LLM inference and serving with AWQ and vLLM.

  • Special Feature: During the 6-week course, you'll have the unique opportunity to engage with two internationally renowned AI experts, including an AI Research Scientist at Microsoft, to discuss the latest trends and developments in LLMs.


What you’ll get out of this course

  • 🔗 Transformer Building Blocks
    Master the fundamental components of Transformer models, including self-attention, multi-head attention, positional encoding, and feed-forward networks. Understand how they power modern LLMs.
  • đŸ§© Prompt Engineering

Harness in-context learning for prototyping, RAG applications, agents, and more.

  • 📚 Retrieval Augmented Generation (RAG)

Create RAG applications to ground LLM outputs in your own reference data.

  • đŸ€– Agents

Develop complex LLM applications capable of reasoning, taking action, and using external tools.

  • 🔧 Embedding Fine-Tuning

Customize open-source embedding models to fit your domain-specific language or data.

  • 🎯 LLM Fine-Tuning

Train LLMs for specific tasks using advanced techniques like PEFT-LoRA and quantization.

  • 🚀 Inference & Serving

Achieve efficient and scalable inference with the right tools and techniques.

  • 🎉 Demo Day!

Present your unique project live to a cohort of peers and the public!
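As a preview of the Transformer building blocks listed above, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of self-attention (toy shapes, no masking and no multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    weights = softmax(scores)      # (seq_len, seq_len); each row sums to 1
    return weights @ V, weights

# Toy example: a sequence of 4 tokens with head dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)
```

Multi-head attention runs several such computations in parallel on learned projections of Q, K, and V and concatenates the results; that composition is covered in Week 2.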


Who is this course for



  • 🧑‍🔬 Data Scientists, Data Analysts, and Machine Learning Engineers:

Enhance your skills in building advanced LLM applications.

  • đŸ’» Software Engineers:

Learn to develop and implement large language model (LLM) applications.

  • 📊 Professionals with a Technical Background (Actuaries, Finance, and More):

Leverage your analytical skills to apply LLMs in data-driven industries such as finance, risk assessment, and business intelligence.

Prerequisites



  • Intermediate knowledge of Python, including Pandas and NumPy.


  • Fundamentals of machine learning and deep learning.



This course includes



  • đŸ“ș Interactive live sessions
  • 📚 Lifetime access to course materials
  • 📖 Several in-depth lessons
  • đŸ‘šâ€đŸ« Direct access to instructor
  • đŸ› ïž 10+ hands-on enterprise projects to gain real-world experience
  • 🔄 Guided feedback & reflection
  • 🌐 Private community of peers

Course syllabus


  • Week 1. Introduction to GenAI & LLMs
  • Week 2. Mastering Transformer Architecture
  • Week 3. Advanced Prompt Engineering: CoT, Self-Consistency, SCRIBE
  • Week 4. Building Production-Ready RAG & Agent Systems Using LangChain
  • Week 5. Fine-tuning LLMs and embedding models
  • Week 6. Inference and LLM Serving

ABOUT YOUR TRAINER

About me

Passionate about innovation and artificial intelligence, I hold a Ph.D. in Statistics with a specialization in sequential data (time series). My academic journey also led me to teach machine learning courses at the university level, where I shared my passion and expertise with numerous students.

As a Senior Data Scientist, I have developed question-answering systems using large language models (LLMs) and knowledge graphs to generate natural language responses, as well as RAG systems for interacting with private data sources. I have successfully led data science projects across a range of use cases, including sequential data and image analysis, in industries such as insurance, manufacturing, and automotive.

My approach combines scientific rigor with practical application, from designing to deploying scalable machine learning solutions. This course is a culmination of my experience and dedication to advancing the field of Generative AI, aiming to equip participants with the skills and knowledge to excel in this rapidly evolving domain.