
Learn Data with Mark

15K subscribers

Weekly 5-7 minute videos on data and LLMs.

Taking Gemma3 for a spin (04:25)
Why every data analyst needs the DuckDB GSheets plugin? (02:34)
Querying structs just got 6x faster in DuckDB 1.2 (04:22)
DuckDB 1.2 CLI Features That Will Make Your Life EASIER! (03:22)
4 Ways DuckDB 1.2 makes SQL even friendlier (04:09)
Level up your Python scripts with uv (04:08)
An intro to Anthropic MCP with DuckDB (06:33)
Agentic Analytics with PhiData and DuckDB (07:54)
Building an AI agent with PhiData and Streamlit (04:07)
How to add memory to a PhiData agent (04:07)
An intro to the PhiData agent library (05:12)
Do LLMs understand markdown tables? (05:08)
Intro to burr: A State Machine for LLM apps (07:50)
Llama 3.2-vision: The best open vision model? (04:27)
Moonshine: Real-Time Speech-To-Text on your laptop (03:30)
NuExtract: An LLM that extracts information (04:08)
Using LLMs on the command line (04:12)
Ollama: Running Hugging Face GGUF models just got easier! (02:32)
The fastest way to run OpenAI Whisper Turbo on a Mac (03:52)
Ollama: How to send multiple prompts to vision models (03:32)
Running OpenAI Whisper Turbo on a Mac (04:09)
An intro to rerankers: A uniform API for reranking models (04:43)
DuckDB dynamic column selection gets even better (05:06)
Ollama and LanceDB: The best combination for Local RAG? (05:21)
Searching images on my laptop with LanceDB (03:13)
Rewriting RAG Queries with OpenAI Structured Outputs (06:14)
DuckDB function chaining: The simpler SQL you didn't know you needed (03:23)
Why OpenAI's new Structured Outputs feature is awesome! (05:50)
What Are Matryoshka Embeddings? (07:18)
How to evaluate retrieval in RAG pipelines (06:45)
Hybrid Search for RAG in DuckDB (Reciprocal Rank Fusion) (06:30)
Full-Text Search vs Vector Search (RAG with DuckDB) (05:53)
Search-Based RAG with DuckDB and GLiNER (07:35)
Local RAG with llama.cpp (08:38)
A UI to quantize Hugging Face LLMs (05:01)
Mistral 7B Function Calling with llama.cpp (05:19)
Does Mistral 7B function calling ACTUALLY work? (06:51)
Mistral 7B Function Calling with Ollama (05:55)
Hugging Face SafeTensors LLMs in Ollama (06:38)
Are LLaVA variants better than original? (03:57)
An Ollama Chatbot Arena (with Streamlit) (04:54)
Ollama can run LLMs in parallel! (04:01)
Serverless GenAI with Beam (GPU as a service) (05:35)
Voice to Text on a Mac with insanely-fast-whisper (03:52)
How does OpenAI Function Calling work? (05:21)
Semantic Router: No more rogue LLM chatbots? (03:51)
Running LLMs on a Mac with llama.cpp (03:47)
GLiNER: Easiest way to do Entity Extraction in 2024? (05:43)
Visualising embeddings with t-SNE (05:36)
Exploring the comments of AI YouTube channels (08:05)
Google Gemma 2B vs 7B with Ollama (04:53)
SLIM: Small models for specific tasks by LLMWare (04:52)
Ollama adds OpenAI API support (04:44)
Content Discovery with Embeddings (ft. Qdrant/FastEmbed) (04:52)
LLaVA 1.6 is here...but is it any good? (via Ollama) (05:41)
Ollama has a Python library! (04:30)
Langroid: Chat to a CSV file using Mixtral (via Ollama) (06:01)
User-Selected metadata in RAG Applications with Qdrant (06:11)
Building a local ChatGPT with Chainlit, Mixtral, and Ollama (05:39)
Constraining LLMs with Guidance AI (05:39)