
Full Stack AI Engineer 2026 – Generative AI & LLMs III



Published 1/2026
MP4 | Video: h264, 1920×1080 | Audio: AAC, 44.1 KHz, 2 Ch
Language: English | Duration: 6h 8m | Size: 3.38 GB

Build production-ready generative AI systems using LLMs, RAG, agents, and full-stack engineering practices

What you’ll learn
Design and build production-ready generative AI systems using Large Language Models (LLMs), transformers, embeddings, and modern AI architectures.
Implement Retrieval-Augmented Generation (RAG) pipelines to ground LLMs in external knowledge, reduce hallucinations, and enable enterprise-grade AI applications.
Develop autonomous agentic AI systems using tool calling, multi-step reasoning, memory, and human-in-the-loop controls.
Create full-stack LLM applications by integrating FastAPI backends, streaming chat interfaces, frontend UX patterns, and stateful memory management.
Optimize AI systems for cost, latency, and scalability using token optimization, caching strategies, model selection tradeoffs, and load management techniques.
Evaluate and monitor LLM outputs using human and automated evaluation methods to ensure accuracy, relevance, and faithfulness.
Apply security, safety, and governance best practices by implementing guardrails, output filtering, policy-based controls, and responsible AI frameworks.
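Several of the outcomes above (embeddings, semantic search, RAG retrieval) rest on one primitive: comparing embedding vectors by cosine similarity. As a rough sketch only, with toy 3-dimensional vectors standing in for real model embeddings and hypothetical helper names, retrieval reduces to a similarity score and a sort:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, doc_vecs):
    """Return (index, score) pairs sorted by similarity to the query, best first."""
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy "embeddings" — a real system would get these from an embedding model.
docs = [[1.0, 0.0, 0.0], [0.7, 0.7, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.1, 0.0]
ranking = rank_by_similarity(query, docs)
print(ranking[0][0])  # index of the closest document: 0
```

Production systems replace the linear scan with an approximate nearest-neighbor index (FAISS or Chroma, as used in the course labs), but the scoring idea is the same.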

Requirements
Basic programming knowledge (Python preferred; expert-level skill not required)
General understanding of APIs or web applications (helpful, not required)
Curiosity about AI and willingness to build hands-on projects

Description
“This course contains the use of artificial intelligence.”

This course is a comprehensive, hands-on journey into Generative AI and Large Language Models (LLMs) designed specifically for Full-Stack AI Engineers. Unlike high-level or theory-only courses, this program focuses on how modern AI systems are actually built, deployed, optimized, and governed in production environments. You will move beyond simple prompt experiments and learn how to engineer reliable, scalable, and enterprise-ready AI systems using LLMs, embeddings, retrieval, agents, tools, and full-stack application architectures. Every section of this course includes a step-by-step hands-on lab, ensuring you not only understand the concepts but also implement them in real code.

Section 1 — Introduction to Generative AI
You will build strong conceptual foundations by understanding generative vs. discriminative models, why generative systems matter, and how they are used across real-world industries such as enterprise software, healthcare, finance, and aviation.
Hands-on Lab: Compare discriminative vs. generative models, generate text using transformer-based models, and map real-world generative AI use cases.

Section 2 — Transformer Architecture & LLM Fundamentals
This section demystifies how transformers actually work, including self-attention, positional encoding, and encoder vs. decoder architectures. You’ll also explore tokenization, embeddings, context windows, and how LLMs are trained using pretraining, fine-tuning, instruction tuning, and RLHF.
Hands-on Lab: Implement self-attention concepts, visualize tokenization and embeddings, and simulate LLM training workflows at a high level.

Section 3 — Large Language Models in Practice
You will work hands-on with popular LLM families including GPT, Claude, Gemini, LLaMA, Mistral, and Falcon, and learn how to choose the right model based on quality, cost, latency, and use case requirements.
Hands-on Lab: Build a multi-model evaluation harness, test hallucinations and bias, and integrate LLM APIs using temperature, top-p, and max tokens.

Section 4 — Prompt Engineering for Engineers
This section teaches prompt engineering as a software engineering discipline, covering system, user, and assistant roles; zero-shot, one-shot, and few-shot prompting; and advanced techniques like chain-of-thought, self-consistency, and constraint-based prompting.
Hands-on Lab: Design robust prompt templates, defend against prompt injection, and implement input/output validation for safe prompting.

Section 5 — Embeddings & Semantic Search
You’ll learn how vector embeddings represent meaning, how cosine similarity and dot product work, and how to build semantic search pipelines using chunking strategies, embedding generation, and similarity-based retrieval.
Hands-on Lab: Build a semantic search system using FAISS and Chroma, compare chunking strategies, and evaluate retrieval accuracy.

Section 6 — Retrieval-Augmented Generation (RAG)
This section shows how to reduce hallucinations by grounding LLMs in external knowledge using RAG architectures, document ingestion pipelines, retriever–generator flows, and context window management.
Hands-on Lab: Build a full RAG pipeline, implement hybrid search, apply re-ranking strategies, and perform multi-document reasoning with citations.

Section 7 — Tool Calling & Function-Based LLMs
You will learn how to make LLMs interact with real systems using function calling, structured JSON outputs, and API-based tools, enabling models to take meaningful actions.
Hands-on Lab: Build tool-using agents, implement stateless and stateful tools, add validation and error handling, and create multi-step tool chains with observability.

Section 8 — Agentic AI Systems
This section focuses on building autonomous AI agents with planning, memory, execution, and self-correction, using architectures such as ReAct, Planner–Executor, and multi-agent systems.
Hands-on Lab: Build autonomous agents, implement long-term memory, enable task decomposition, and add human-in-the-loop (HITL) control.

Section 9 — Full-Stack LLM Application Development
You’ll integrate AI into real applications using FastAPI-based backends, streaming responses, and frontend chat interfaces, while managing state, memory, and context across sessions.
Hands-on Lab: Build a full-stack LLM application with streaming chat, session memory, persistent storage, and context pruning strategies.

Section 10 — Evaluation, Cost & Performance Optimization
This section teaches how to measure and optimize AI systems using human and automated evaluation and accuracy, relevance, and faithfulness metrics, and how to reduce costs through token optimization, caching, and model routing.
Hands-on Lab: Build an evaluation harness, implement response caching, compare model tiers, and perform latency and load testing.

Section 11 — Ethics, Security & Responsible AI
You’ll learn how to deploy AI responsibly using guardrails, output filtering, policy-based controls, and enterprise governance frameworks to ensure safety, compliance, and trust.
Hands-on Lab: Implement security defenses, prompt injection protection, output validation, and enterprise-ready AI governance workflows.

By the End of This Course, You Will Be Able To:
Build production-ready generative AI systems
Design robust prompts and agent architectures
Implement RAG pipelines and semantic search
Develop full-stack LLM applications
Optimize cost, latency, and scalability
Deploy secure, governed, enterprise-grade AI
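The tool-calling pattern from Section 7 — the model emits a structured JSON request, and application code validates and dispatches it — can be sketched in plain Python. This is a hedged illustration, not the course's code: the `get_weather` tool and the `{"tool": ..., "args": ...}` shape are hypothetical stand-ins for a real model's structured output.

```python
import json

def get_weather(city: str) -> str:
    """Hypothetical tool; a real version would call a weather API."""
    return f"Sunny in {city}"

# Registry mapping tool names the model may request to callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a structured tool call emitted by the model, validate it, run it."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return "error: model did not return valid JSON"
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        return f"error: unknown tool {call.get('tool')!r}"
    try:
        return tool(**call.get("args", {}))
    except TypeError as exc:  # wrong or missing arguments
        return f"error: bad arguments ({exc})"

# A model response requesting a tool invocation:
reply = '{"tool": "get_weather", "args": {"city": "Paris"}}'
print(dispatch(reply))  # Sunny in Paris
```

The validation and error branches matter as much as the happy path: in an agent loop, these error strings are fed back to the model so it can self-correct, which is the basis of the HITL and observability work in the Section 7 and 8 labs.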




