Semantic Kernel - Do you need Python to play with LLMs?
1. Do you need Python to play with LLMs?
Marco De Nittis
Independent cloud architect
marco.denittis [a] gmail.com | @mdnmdn
First steps with Semantic Kernel
2. Who am I?
• Independent cloud architect
• Trainer
• ❤ cloud, serverless, DevOps, AI, WASM
• Curious and tinkerer
3. Objectives
• A gentle introduction to LLM-powered apps
• First steps with Semantic Kernel
4. BTW, what is an LLM?
• Probabilistic engine: generates the most probable text response for a given input
• According to its training corpus
• (Almost) a pure function
• No dynamic/short-term memory

string llm(string input, …)
{
    …
}
5. Why use one in our app?
• Add some kind of intelligence
• Retrieve and summarize information semantically
• Understand unstructured input
• Generate “creative” data
6. How
• Train a custom AI model
• Fine-tune existing models
• Consume 3rd-party models via API
• OpenAI
• Azure AI services
• Hugging Face
• …
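The third option is the easiest starting point: a plain HTTP call, no framework at all. A minimal sketch against OpenAI's public chat completions endpoint (the API key and model name are placeholders to adapt):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

// Consume a 3rd-party model directly via its REST API.
// OPENAI_API_KEY and the model name are placeholders.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", apiKey);

var body = JsonSerializer.Serialize(new
{
    model = "gpt-3.5-turbo",
    messages = new[] { new { role = "user", content = "Say hello in Italian" } }
});

var response = await http.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(body, Encoding.UTF8, "application/json"));

// The JSON response contains the generated message in choices[0].message.content.
Console.WriteLine(await response.Content.ReadAsStringAsync());
```

Everything Semantic Kernel does ultimately reduces to calls like this one; the SDK adds abstractions on top.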
7. Semantic Kernel …what?
• Integrate AI services with apps
• Open source SDK
• By Microsoft with ❤
• Enhances the base functionality of the AI APIs
• High-level functions
• Overlaps with Python frameworks such as LangChain
8. Semantic Kernel 2
• Multiplatform
• Features:
• Connectivity to AI services
• Custom “functions” to empower the AI
• Integrated memory support
• Orchestration of the AI to use all available features
• Assistants API (NEW)
11. Architecture
• Connectors:
• Consume AI models and vector DBs
• Plugins:
• Provide LLM enhancements
• “Make a summary”, “Sentiment analysis”
• Integrate with external systems
• “Send a message”, “List Discord users”
12. Plugins/functions
• Plugins: groups of functions
• Functions: enhance the model's capabilities
• AI, prompt-based:
• “Summarize”, “Write a haiku”
• Code-based:
• “GetTime”, “Send a message”, “Search the internet”
13. Memory of an LLM
• LLMs have no memory
• Several techniques to provide custom data:
• Include all the information in the prompt
• E.g. all the messages of a chat
• Retrieval Augmented Generation (RAG)
• Vector DB + semantic search
• Fine-tuning
14. RAG
• Include only the relevant data in the prompt
• Data is made semantically searchable in a vector DB
• Embeddings:
• Conversion from text to a vector of floats
• Coordinates in a “space of concepts”
• The vector DB makes vectors searchable by similarity
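Similarity between embedding vectors is typically measured with cosine similarity. A minimal sketch (the vectors themselves would come from an embedding model):

```csharp
// Cosine similarity between two embedding vectors:
// close to 1.0 = semantically similar, close to 0 = unrelated.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];    // dot product
        normA += a[i] * a[i];  // squared magnitude of a
        normB += b[i] * b[i];  // squared magnitude of b
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}
```

A vector DB essentially runs this comparison at scale, using approximate indexes so it doesn't have to scan every stored vector.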
17. Semantic Kernel functions
• Specialized behaviours
• Zero or more input parameters
• Text output
• Controlled and invoked by the LLM
• Functions and parameters are decorated with textual descriptions so they can be understood and used by the LLM
18. Semantic Functions
• Functions “executed” by the LLM
• E.g. “SummarizeText”
• Based on prompt engineering
• Modes:
• Inline (strings in C#)
• Textual/templated: defined by text + JSON metadata
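An inline-mode sketch: the function's “body” is just a prompt template string. The exact API has changed across SK releases; this assumes the SK 1.x .NET API, with the model name, API key and `longText` as placeholders:

```csharp
using Microsoft.SemanticKernel;

// Build a kernel connected to a chat model (placeholders to adapt).
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey)
    .Build();

// Inline semantic function: prompt engineering as a C# string.
// {{$input}} is a template variable filled in at invocation time.
var summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in one sentence:\n{{$input}}");

var result = await kernel.InvokeAsync(summarize,
    new KernelArguments { ["input"] = longText });
Console.WriteLine(result);
```

The templated mode moves the same prompt into a text file with a JSON metadata file alongside it, so prompts can be versioned and edited outside the code.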
19. Native Functions
• Expose capabilities provided via code:
• GetTime
• Send an email
• Access the internet
• Functions and parameters decorated with textual descriptions
• Two degrees of intelligence:
• The code inside the function
• Elaboration of the result by the AI
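A native-function sketch, again assuming the SK 1.x .NET API (earlier versions used different attribute names, e.g. `[SKFunction]`) and an already configured `kernel` instance:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class TimePlugin
{
    // The Description attributes are what the LLM "reads" to decide
    // when and how to invoke this function.
    [KernelFunction, Description("Returns the current UTC date and time")]
    public string GetTime() => DateTime.UtcNow.ToString("R");
}

// Registration: the plugin's functions become invokable through the kernel.
kernel.Plugins.AddFromType<TimePlugin>("Time");
```

The code provides the first degree of intelligence (the deterministic logic); the second comes when the AI takes the returned value and elaborates on it in its answer.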
20. Planners
• Orchestrate chains of functions
• Let the AI decide:
• Which functions to call
• How to use the parameters/results
• How to compose them
• Awesome results
• Cost intensive
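A sketch of the idea — planner APIs have changed considerably between SK versions; this assumes the Handlebars planner from the SK 1.x planning packages and a kernel with plugins (e.g. the hypothetical TimePlugin above) already registered:

```csharp
using Microsoft.SemanticKernel.Planning.Handlebars;

// The planner asks the LLM to compose the registered functions
// into a plan that satisfies the goal, then executes that plan.
var planner = new HandlebarsPlanner();
var plan = await planner.CreatePlanAsync(kernel,
    "What time is it now? Answer with a haiku.");

var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result);
```

Each planning round involves extra LLM calls to generate and validate the plan, which is where the cost intensity comes from.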
22. Challenges
• Making the LLM act in an efficient way
• Prompt injection
• Appropriate handling of business data
• Predictability
• Cost management
• Sustainability