@arconsis
November 6, 2024
Meet Our Speakers: Adrian and Lars at the "InfoDay Generative AI für Developer"

The German-speaking conference "InfoDay Generative AI für Developer" is a three-day online event (November 18-20, 2024) for anyone who wants to dive deeper into Generative AI and learn how to integrate it effectively into new and existing business applications. From agents to large language models, embeddings, RAG, fine-tuning, and frameworks like LangChain and LlamaIndex, all the way to zero-shot prompting, this event offers not only presentations but hands-on implementation and discussions!

In recent years, the use of artificial intelligence (AI), and in particular large language models (LLMs), has increased significantly. However, this poses risks to personally identifiable information (PII), as AI systems may store input text and use it for future training. In their talk, Adrian and Lars will showcase a method to de-identify prompts by replacing sensitive information with neutral, semantically identical data. The input is first anonymized by a self-hosted backend, then passed on to the actual LLM; finally, the response is returned to its original form through re-identification and sent back to the user. This improves data protection when using external AI services.
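The anonymize → LLM → re-identify flow described above can be sketched in a few lines. This is only an illustrative toy, not the speakers' actual implementation: the `PromptAnonymizer` class, its placeholder format, and the regex-based detection are all assumptions; a production system would use proper NER-based PII detection (and would run server-side in the self-hosted backend).

```python
import re


class PromptAnonymizer:
    """Illustrative sketch: swap PII for neutral placeholders before a prompt
    leaves the self-hosted backend, then restore the originals in the
    LLM response (re-identification)."""

    def __init__(self):
        self._mapping = {}   # placeholder -> original value
        self._counter = 0
        # Toy regexes for demonstration only; real PII detection would use
        # an NER model rather than patterns like these.
        self._patterns = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
            "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
        }

    def anonymize(self, text: str) -> str:
        """Replace each detected PII span with a unique placeholder."""
        for label, pattern in self._patterns.items():
            def repl(match):
                self._counter += 1
                placeholder = f"<{label}_{self._counter}>"
                self._mapping[placeholder] = match.group(0)
                return placeholder
            text = pattern.sub(repl, text)
        return text

    def reidentify(self, text: str) -> str:
        """Restore the original values wherever placeholders survive
        in the LLM's response."""
        for placeholder, original in self._mapping.items():
            text = text.replace(placeholder, original)
        return text
```

Usage follows the three steps from the talk abstract: anonymize the prompt locally, send the sanitized version to the external LLM, then re-identify the response before showing it to the user, so the provider never sees the raw PII.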