
nDimensionsAI

3.4K subscribers

Tech Enthusiast · Data Insights Discoverer · AI Algorithm Wonder



nDimensionsAI
Posted 3 months ago

🚀 OpenHands: Code Less, Build More! 🤖💡

The AI revolution in software development is here—and if you’re not using OpenHands, you’re already falling behind! 😱

⚡ What is OpenHands?
- AI-powered coding agents that write, modify, and execute code autonomously—just like a real developer!
- Integrates with LLMs like Claude 3.5 Sonnet for next-level AI-driven development.
- Runs effortlessly with Docker—set up in minutes, no complex configs!
- Supports CLI interaction, GitHub Actions, local filesystems, and more!

💡 Why You CAN’T Afford to Miss This:
- Say goodbye to manual coding—AI does the heavy lifting for you.
- Boost productivity with AI-driven automation.
- Stay ahead of the game—don't let others outpace you in the AI era!

🎯 Designed for solo devs & AI enthusiasts—but don’t wait until everyone’s already using it!

🚀 Star it on GitHub, join the community, and start building smarter today!

#AI #Coding #Automation #LLM #Claude35 #OpenSource #DevTools #OpenHands


nDimensionsAI
Posted 3 months ago

🚀 AI is Reshaping the Economy—Are You Ready?

AI is Transforming the Economy – Don’t Get Left Behind!

Anthropic just launched the Economic Index, a game-changing initiative tracking AI’s real-world impact! 📊✨

🔍 The first report dives into millions of anonymized Claude conversations, revealing how AI is revolutionizing industries, automating tasks, and boosting productivity.

💡 Are you using AI to stay ahead—or falling behind the curve?

🔗 Get the insights now & future-proof your work!

#AIRevolution #FutureOfWork #Anthropic #Claude


nDimensionsAI
Posted 3 months ago

🚀 Revolutionize Your Data Game with Wren AI! 🤖📊

Data-driven teams, are you still stuck writing SQL queries manually? ⏳ Stop wasting time and start chatting with your data! 💬✨

🔥 Meet Wren AI – The Open-Source GenBI AI Agent! 🔥

🔹 Instantly generate Text-to-SQL, charts, spreadsheets, reports & BI insights—without writing a single line of code!
🔹 Supports multiple LLMs (OpenAI, Gemini, Bedrock, Groq & more!)—choose your AI power! ⚡
🔹 Ask questions in any language & get instant AI-powered insights & visualizations! 🌍📊
🔹 Export data directly to Excel & Google Sheets—because insights should be actionable! ✅

🚀 Your competitors are already leveraging AI-powered BI… don’t get left behind! 😱

💡 Wren AI is 100% OPEN-SOURCE! Don’t miss out—try it now & transform your data workflow!

#AI #DataAnalytics #TextToSQL #BusinessIntelligence #GenBI #WrenAI #DataScience #MachineLearning #SQL


nDimensionsAI
Posted 11 months ago

What are 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 (𝗥𝗔𝗚) 𝗦𝘆𝘀𝘁𝗲𝗺𝘀?



Here is an example of a simple RAG-based Chatbot to query your Private Knowledge Base.



The first step is to store the knowledge from your internal documents in a format suitable for querying. We do so by embedding it with an Embedding Model (a short code sketch follows the steps below):

𝟭: Split the text corpus of the entire knowledge base into chunks - each chunk represents a single piece of context available to be queried. Data of interest can come from multiple sources, e.g. documentation in Confluence supplemented by PDF reports.
𝟮: Use the Embedding Model to transform each of the chunks into a vector embedding.
𝟯: Store all vector embeddings in a Vector Database.
𝟰: Save the text that each embedding represents separately, together with a pointer to the embedding (we will need this later).
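
Here is a minimal code sketch of steps 1 to 4. The sentence-transformers model, FAISS index, and paragraph-based chunking are illustrative assumptions - swap in your own Embedding Model and Vector Database.

```python
# Indexing sketch (steps 1-4): chunk -> embed -> store vectors + text.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# 1: naive chunking - split the corpus into paragraph-sized chunks
documents = ["...Confluence export text...", "...PDF report text..."]  # your sources
chunks = [c.strip() for doc in documents for c in doc.split("\n\n") if c.strip()]

# 2: transform each chunk into a vector embedding with the Embedding Model
model = SentenceTransformer("all-MiniLM-L6-v2")  # model name is an assumption
embeddings = model.encode(chunks, normalize_embeddings=True)

# 3: store all vector embeddings in a vector index (FAISS here)
index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

# 4: keep the chunk text keyed by its position in the index (the "pointer")
chunk_store = {i: chunk for i, chunk in enumerate(chunks)}
```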



Next, we can start constructing the answer to a question/query of interest (again, a code sketch follows these steps):

𝟱: Embed a question/query you want to ask using the same Embedding Model that was used to embed the knowledge base itself.
𝟲: Use the resulting Vector Embedding to run a query against the index in the Vector Database. Choose how many vectors you want to retrieve - this determines how much context you will retrieve and eventually use to answer the question.
𝟳: The Vector DB performs an Approximate Nearest Neighbour (ANN) search for the provided vector embedding against the index and returns the previously chosen number of context vectors - the ones most similar in the given Embedding/Latent space.
𝟴: Map the returned Vector Embeddings to the text chunks that represent them.
𝟵: Pass the question together with the retrieved context text chunks to the LLM via the prompt. Instruct the LLM to use only the provided context to answer the given question. This does not mean that no Prompt Engineering will be needed - you will want to ensure that the answers returned by the LLM fall within expected boundaries, e.g. if the retrieved context contains no usable data, make sure no made-up answer is provided.
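
And a matching sketch of steps 5 to 9, reusing the model, index, and chunk_store objects from the indexing sketch above. The OpenAI client and model name are placeholder assumptions - any LLM works here.

```python
# Query sketch (steps 5-9): embed question -> ANN search -> map ids to text -> prompt LLM.
import numpy as np
from openai import OpenAI

def answer(question: str, k: int = 4) -> str:
    # 5: embed the question with the same Embedding Model used for the knowledge base
    q_vec = model.encode([question], normalize_embeddings=True)

    # 6 + 7: ANN search against the index, retrieving the k most similar context vectors
    _, ids = index.search(np.asarray(q_vec, dtype="float32"), k)

    # 8: map the returned embedding ids back to the text chunks they represent
    context = "\n\n".join(chunk_store[i] for i in ids[0] if i != -1)

    # 9: pass question + retrieved context to the LLM, constrained to the provided context
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption - use whichever LLM you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```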



To make it a real Chatbot, front the entire application with a Web UI that exposes a text input box as the chat interface. After running the provided question through steps 1 to 9, return and display the generated answer. This is how most chatbots based on one or more internal knowledge base sources are actually built nowadays.
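
As a toy illustration of that last step, here is how the answer function from the sketch above could be exposed as a chat interface. Gradio is just an assumption - any web framework with a text input box works.

```python
# Chat UI sketch: wrap the RAG pipeline in a simple web chat interface.
import gradio as gr

def chat_fn(message, history):
    # run the user's question through the pipeline above and return the generated answer
    return answer(message)

gr.ChatInterface(fn=chat_fn).launch()
```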

As described, the system is really just a naive RAG that is usually not fit for production-grade applications. You need to understand all of the moving pieces in the system in order to tune them by applying advanced techniques, transforming the Naive RAG into an Advanced RAG fit for production. More on this in upcoming posts, so stay tuned!

#LLM #GenAI #LLMOps #MachineLearning
