Agentic AI Security – Data Masters
Agentic AI Security – Data Masters
125.00 Il prezzo originale era: €125.00.9.00Il prezzo attuale è: €9.00. Aggiungi al carrello
In offerta!

Agentic AI Security – Data Masters

Il prezzo originale era: €125.00.Il prezzo attuale è: €9.00.

-93%

Descrizione

Scarica il corso “Agentic AI Security – Data Masters”

Acquisisci le competenze fondamentali per la sicurezza dei tuoi sistemi AI: analizza le minacce, mitiga i rischi e progetta agenti AI che interagiscono con i tuoi dati in modo sicuro e controllabile

  • LLM Security e Prompt Injection
  • Agentic AI Security Architectures
  • OWASP Top 10 per LLMs
  • MITRE ATLAS per Sistemi AI

Gli agenti AI stanno entrando ovunque: prodotti digitali, workflow aziendali e processi operativi. Accedono a dati, usano tool, chiamano API e prendono decisioni. In questo contesto, la sicurezza degli agenti è l’evoluzione più urgente della cybersecurity.

In questo corso impari a riconoscere e modellare le minacce tipiche dei sistemi AI moderni, andando oltre la sicurezza tradizionale: prompt injection, jailbreak, data leakage, poisoning, supply chain attacks, e rischi legati all’eccessiva autonomia degli agenti.

Acquisisci tutte le competenze per progettare strategie di mitigazione concrete, valutare i trade-off di sicurezza e costruire sistemi AI più robusti, controllabili e affidabili già dalla fase di design.

 

📁 SFOGLIA CONTENUTO CORSO
📁 1. Introduzione alla Agentic AI Security78,60MB
🎬 1. 01 – Introduzione alla Agentic AI Security – 7:12.mp417,48MB
📄 1. 01_-_Introduzione_.pdf1,62MB
🎬 2. 02 – OWASP GenAI Security Project – 1:36.mp45,61MB
🎬 3. 03 – MITRE ATLAS – 11:56.mp446,01MB
🎬 4. 04 – Siamo pronti a partire – 3:11.mp47,87MB
📁 2. LLM01 [extended] – Prompt Injection & Jailbreak264,92MB
📄 1. 01_-_LLM01_extended_-_Prompt_Injection_Jailbreak_.pdf1,80MB
🎬 1. LLM01 [extended] 01 – Prompt Injection & Jailbreak – 23:14.mp460,35MB
📄 2. 02_-_Common_Attacks_.pdf3,31MB
🎬 2. LLM01 [extended] 02.1 – Commons Attacks – Parte 1 – 27:00.mp474,76MB
📄 3. 02_-_Common_Attacks_.pdf3,31MB
🎬 3. LLM01 [extended] 02.2 – Commons Attacks – Parte 2 – 20:42.mp453,80MB
📄 4. 03_-_Common_Mitigations_.pdf2,37MB
🎬 4. LLM01 [extended] 03 – Common Mitigations – 27:35.mp465,09MB
📄 5. LLM01 [extended] 04 – Sample Attack Scenarios da report OWASP – Approfondimento.pdf131,25KB
📁 3. LLM02 – Sensitive Information Disclosure72,75MB
📄 1. 01_-_LLM02_-_Sensitive_Information_Disclosure_.pdf1,27MB
🎬 1. LLM02 01 – Sensitive Information Disclosure – 9:45.mp423,99MB
📄 2. 02_-_Mitigation_.pdf1,82MB
🎬 2. LLM02 02 – Mitigations – 14:16.mp434,37MB
📄 3. 03.01_-_2023_March_20_ChatGPT_outage_.pdf955,01KB
🎬 3. LLM02 03.01 – 2023 March 20 ChatGPT Outage – 4:36.mp410,36MB
📁 4. LLM03 – Supply Chain95,49MB
📄 1. 01_-_Suppy_Chain_.pdf760,20KB
🎬 1. LLM03 01 – Supply Chain – 4:17.mp49,29MB
📄 2. 02_-_Common_Risks_.pdf1,90MB
🎬 2. LLM03 02 – Common Risks – 18:12.mp442,24MB
📄 3. 03_-_Mitigation_.pdf2,34MB
🎬 3. LLM03 03 – Mitigations – 2:20.mp438,99MB
📁 5. LLM04 – Data and Model Poisoning85,55MB
🎬 1. LLM04 01 – Data and Model Poisoning – 2:52.mp47,08MB
📄 1. LLM04_01_-_Data_and_Model_Poisoning_.pdf616,52KB
🎬 2. LLM04 02 – Data Poisoning – 12:24.mp428,80MB
📄 2. LLM04_02_-_Data_Poisoning_.pdf1,67MB
🎬 3. LLM04 03 – Model Poisoning – 7:31.mp418,22MB
📄 3. LLM04_03_-_Model_Poisoning_.pdf1,53MB
🎬 4. LLM04 04 – Mitigations – 10:59.mp426,10MB
📄 4. LLM04_04_-_Mitigations_.pdf1,57MB
📁 6. LLM05 – Improper Output Handling56,06MB
🎬 1. LLM05 01 – Improper Output Handling – 6:12.mp414,24MB
📄 1. LLM05_01_-_Improper_Output_Handling_.pdf1,03MB
🎬 2. LLM05 02 – Common Risks – 4:22.mp49,76MB
📄 2. LLM05_02_-_Common_Risks_.pdf718,21KB
🎬 3. LLM05 03 – Mitigations – 13:01.mp428,73MB
📄 3. LLM05_03_-_Mitigations_.pdf1,60MB
📁 7. LLM06 – Excessive Agency106,60MB
🎬 1. LLM06 01 – Excessive Agency – 13:57.mp432,00MB
📄 1. LLM06_01_-_Excessive_Agency_.pdf1,03MB
🎬 2. LLM06 02 – Common Risks – 13:31.mp430,42MB
📄 2. LLM06_02_-_Common_Risks_.pdf1,42MB
🎬 3. LLM06 03 – Mitigations – 16:55.mp439,84MB
📄 3. LLM06_03_-_Mitigations_.pdf1,90MB
📁 8. LLM07 – System Prompt Leakage69,15MB
🎬 1. LLM07 01 – System Prompt Leakage – 8:31.mp420,06MB
📄 1. LLM07_01_-_System_Prompt_Leakage_.pdf1,04MB
🎬 2. LLM07 02 – Common Risks – 8:20.mp420,53MB
📄 2. LLM07_02_-_Common_Risks_.pdf1,17MB
🎬 3. LLM07 03 – Mitigations – 10:18.mp424,77MB
📄 3. LLM07_03_-_Mitigations_.pdf1,58MB
📁 9. LLM08 – Vector and Embedding Weaknesses63,28MB
🎬 1. LLM08 01 – Vector and Embedding Weaknesses – 9:13.mp420,60MB
📄 1. LLM08_01_-_Vector_and_Embedding_Weaknesses_.pdf1,42MB
🎬 2. LLM08 02 – Common Risks – 9.58.mp422,72MB
📄 2. LLM08_02_-_Common_Risks_.pdf1,40MB
🎬 3. LLM08 03 – Mitigations – 6:52.mp415,97MB
📄 3. LLM08_03_-_Mitigations_.pdf1,18MB
📁 10. LLM09 – Misinformation67,14MB
🎬 1. LLM09 01 – Misinformation – 7:21.mp417,17MB
📄 1. LLM09_01_-_Misinformation_.pdf1,18MB
🎬 2. LLM09 02 – Common Risks – 9:33.mp421,90MB
📄 2. LLM09_02_-_Common_Risks_.pdf1,42MB
🎬 3. LLM09 03 – Mitigations – 10:42.mp423,88MB
📄 3. LLM09_03_-_Mitigations_.pdf1,59MB
📁 11. LLM10 – Unbounded Consumption47,17MB
🎬 1. LLM10 01 – Unbounded Consumption – 3:39.mp49,09MB
📄 1. LLM10_01_-_Unbounded_Consumption_.pdf805,63KB
🎬 2. LLM10 02 – Common Risks – 7:00.mp416,22MB
📄 2. LLM10_02_-_Common_Risks_.pdf1,17MB
🎬 3. LLM10 03 – Mitigations – 7:59.mp418,82MB
📄 3. LLM10_03_-_Mitigations_.pdf1,10MB