Introducing Veritas Alta™ Copilot
Veritas Alta Copilot has been designed using the latest large language models to reduce the complexity of enterprise data management and is now available to users of Veritas Alta View. Through simple conversation in natural language, it can create sophisticated reports to monitor data protection infrastructure, quickly identify cyber vulnerabilities and operational inefficiencies, proactively assist with troubleshooting, and guide users through complex data management tasks. Alta Copilot has been trained on Veritas best practices and product documentation. It empowers multi-tasking IT generalists to get the level of performance from their technology investment that is typically achieved only when highly specialized experts handle management and configuration.

Learn more about Veritas Alta Copilot: https://www.veritas.com/blogs/introducing-veritas-alta-copilot-simplifying-data-management-with-ai
What is Retrieval-Augmented Generation or RAG?
RAG explained in 60 seconds: https://youtu.be/bN9oGxZCIU8?si=1f9Ax3wbr3sFFyxq

Over the past few weeks, a number of people have asked me about Retrieval-Augmented Generation (RAG). Here is a brief overview of RAG and how it enhances LLMs.

TL;DR: RAG is a framework that upgrades large language models (LLMs) by integrating real-time, enterprise-specific data. This approach keeps AI outputs up to date, relevant, and more aligned with business needs, addressing the typical knowledge cutoff of LLMs.

Deep Dive: RAG represents a significant shift in how LLMs are applied, enabling these models to dynamically access and utilize updated external data. This method not only keeps the AI's responses current but also tailors them to specific business contexts.

Key Advantages of RAG:
1. Addresses the knowledge cutoff: RAG allows LLMs to access fresh data beyond their training set, ensuring timely and relevant outputs.
2. Customizable AI responses: By connecting to specific enterprise databases or documents, RAG delivers insights that are directly applicable to the business.
3. Efficiency in updating models: RAG offers a practical alternative to frequent model retraining, saving time and resources.

How RAG Works: RAG combines the generative power of LLMs with a retrieval function. This function searches connected data sources for information relevant to a given query, which the LLM then uses to generate informed and current responses (see the sketch at the end of this post).

Applications in Business: Practical uses of RAG include enhancing customer service bots with the latest product data or equipping financial analysis tools with real-time market information, making AI systems more responsive and effective in a business setting.

Image Source: Generative AI with Large Language Models (DeepLearning.AI)
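To make the retrieve-then-generate flow concrete, here is a minimal sketch of a RAG pipeline. The documents, the embedding model choice, and the prompt format are illustrative assumptions, not a specific vendor API; it assumes the sentence-transformers package is installed.

```python
# A minimal sketch of the RAG flow: embed documents, retrieve the most
# relevant ones for a query, and prepend them to the LLM prompt.
# Documents and model choice here are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Product X supports immutable backups as of version 10.3.",
    "Quarterly report: backup success rate improved to 99.2%.",
    "Runbook: to restore a VM, open the console and select Recover.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # normalized vectors: dot product = cosine
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Does Product X support immutable backups?"
context = "\n".join(retrieve(query))

# The augmented prompt is what gets sent to the LLM in place of the
# bare question, grounding the answer in retrieved enterprise data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The same pattern scales up by swapping the in-memory list for a vector database and sending the final prompt to whichever LLM you use.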
Navigate Cyber Resilience in the Age of Generative AI
As the AI hype continues to build, it's crucial to separate fact from fiction, especially when it comes to cybersecurity. Renowned technologist and thought leader Sally Eaves recently joined me to dive deep into the evolving landscape of cyber resilience in the age of generative AI.

Watch our conversation here: https://vrt.as/4bzFXBo

Learn more about how Veritas helps you navigate your generative AI journey: Navigate Cyber Resilience in the Age of Generative AI
What is Fine-tuning?
Generative AI Fundamentals Part 5 - Fine-tuning, 60-second version: https://youtu.be/oEbOJiYIRxE?si=7u0PTFrWKEAiHgvH

What is Fine-tuning? Fine-tuning involves tailoring a pre-trained model to perform specific tasks. While unsupervised fine-tuning is possible, in most cases it is a supervised learning process: you use a dataset of labeled examples with prompts and responses to update the weights of an LLM and improve its ability at specific tasks such as translation, summarization, or writing articles. Once a model has been fine-tuned, you won't need to provide as many examples in the prompt, which saves cost and enables lower-latency requests.

How does Fine-tuning work? At a high level, fine-tuning involves the following steps (a code sketch follows at the end of this post):
- Prepare and upload training data
- Train a new fine-tuned model
- Evaluate the results and return to step 1 if needed
- Use your fine-tuned model

Detailed explanation: https://youtu.be/7Qkzn6r5H1M?si=UNSbK-fFeCSlO6zd

In the next video, we will dive into the hot topic of RAG, or Retrieval-Augmented Generation. Thanks for tuning in!
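As a concrete illustration of the steps above, here is a minimal supervised fine-tuning sketch using the Hugging Face transformers and datasets libraries. The base model, the training file name, and the hyperparameters are assumptions for illustration, not recommendations.

```python
# A minimal sketch of supervised fine-tuning with Hugging Face
# transformers/datasets. "train.jsonl" is a hypothetical file of
# {"text": "<prompt + response>"} records; gpt2 is a small stand-in
# base model, and the hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Step 1: prepare training data (labeled prompt/response examples).
dataset = load_dataset("json", data_files="train.jsonl")["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

# Step 2: train a new fine-tuned model by updating the base weights.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Steps 3-4: evaluate on held-out prompts, iterate on the data if
# needed, then load "ft-model" for fewer-shot, lower-latency prompting.
```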
Intro to Large Language Models: Pre-training, fine-tuning, and RAG
Generative AI Fundamentals: In the Generative AI development process, understanding the distinctions between pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation) is crucial for efficient resource allocation and achieving targeted results. Here's a comparative analysis for a practical perspective:

Pre-training:
• Purpose: Create a versatile base model with a broad grasp of language.
• Resources & Cost: Resource-heavy, requiring thousands of GPUs and significant investment, often in the millions.
• Time & Data: The longest phase, utilizing extensive, diverse datasets.
• Impact: Provides a robust foundation for various AI applications, essential for general language understanding.

Fine-tuning:
• Purpose: Customize the base model for specific tasks or domains.
• Resources & Cost: More economical; utilizes fewer resources.
• Time & Data: Quicker, focused on smaller, task-specific datasets.
• Impact: Enhances model performance for particular applications, crucial for specialized tasks and efficiency in AI solutions.

RAG:
• Purpose: Augment the model's responses with external, real-time data.
• Resources & Cost: Depends on the complexity of the retrieval system.
• Time & Data: Varies based on integration and database size.
• Impact: Offers enriched, contextually relevant responses, pivotal for tasks requiring up-to-date or specialized information.

So what? Understanding these distinctions helps you deploy AI resources strategically. Pre-training establishes a broad base, fine-tuning offers specificity, and RAG introduces an additional layer of contextual relevance. The choice depends on your project's goals: broad understanding, task-specific performance, or dynamic, data-enriched interaction.

Effective AI development isn't just about building models; it's about choosing the right approach to meet your specific needs and constraints. Whether the priority is cost efficiency, time-to-market, or depth of knowledge integration, this understanding guides you to make informed decisions for impactful AI solutions. Keep this comparative analysis at your fingertips for your next AI project.
What is Pre-training?
Generative AI Fundamentals Part 4 - Pre-training: https://youtu.be/R75Sy88zSEI?si=be9PFTSr8N5cDtGV

What is Pre-training? Pre-training is the process of teaching a model to understand and process language before it is fine-tuned for specific tasks. It involves exposing the model to vast amounts of text data.

How does Pre-training work? During pre-training, the model learns to predict the next word in a sentence, understand context, and capture the essence of language patterns. This is done through self-supervised learning, where the raw text itself provides the training signal and no explicit labels are needed (see the sketch at the end of this post).

How to train your ChatGPT?
1. Download ~10TB of text.
2. Get a cluster of ~6,000 GPUs.
3. Compress the text into a neural network; pay ~$2M and wait ~12 days.
4. Obtain the base model.
(Numbers sourced from "Intro to LLMs" by Andrej Karpathy.)

In the next couple of videos we will talk about fine-tuning and Retrieval-Augmented Generation (RAG). Thanks for tuning in!
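To make the "predict the next word" objective concrete, here is a minimal sketch that computes the causal language-modeling loss a model minimizes during pre-training. It uses a small pre-trained GPT-2 as a stand-in; the model choice and the sentence are illustrative assumptions.

```python
# A minimal sketch of the pre-training objective: the model is scored on
# how well it predicts each next token in raw text. gpt2 is a small
# stand-in; real pre-training runs this loss over trillions of tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Backups protect data against ransomware."
inputs = tokenizer(text, return_tensors="pt")

# Passing labels=input_ids makes the library compute the next-token
# cross-entropy loss: each position is trained to predict the token
# that follows it. No human labels are needed; the text itself is the
# supervision (self-supervised learning).
with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])
print(f"next-token loss: {out.loss.item():.2f}")
```

Pre-training is this same loss minimized over a massive corpus, billions of times.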
What is a Transformer Model?
Generative AI Fundamentals Part 3: https://youtu.be/WJVkwFBe2rY?si=Jgh31TpzxNejwbOm

If you have looked up the full form of ChatGPT, you know that GPT stands for Generative Pre-trained Transformer.

What is a Transformer Model? A transformer model is a neural network that learns context, and thus meaning, by tracking relationships in sequential data, like the words in this sentence. It was first introduced in the 2017 Google research paper "Attention Is All You Need". At the time it was showcased as a translation model, but its applications in text generation have become exceedingly popular.

How does a Transformer Model work? Transformer models apply a set of mathematical techniques, called attention or self-attention, to detect subtle ways that even distant data elements in a series influence and depend on each other. The power of the transformer architecture lies in its ability to learn the relevance and context of every word in a sentence, not just to its neighbors but to every other word in the input. The model applies attention weights to those relationships and learns the relevance of each word to every other word, no matter where they appear (a minimal sketch of self-attention follows at the end of this post).

Transformer models provide a step change in performance over RNNs (recurrent neural networks) by parallelizing the processing of text, which was previously done sequentially and was therefore more limiting. This paved the way for the Generative AI revolution we are experiencing today.

Stay tuned for the next video, where we will dive deeper into pre-training and fine-tuning LLMs.
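Here is a minimal sketch of the scaled dot-product self-attention computation at the heart of the transformer, in plain NumPy. The tiny dimensions and random weights are illustrative; real models use learned weights and many attention heads.

```python
# A minimal sketch of scaled dot-product self-attention (single head).
# Every position attends to every other position in one parallel matrix
# product, which is what lets transformers relate distant words.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                   # 4 tokens, 8-dim embeddings (toy sizes)
x = rng.normal(size=(seq_len, d_model))   # token embeddings

# Learned query/key/value projections (random here, for illustration).
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# How strongly each token should attend to every other token.
scores = Q @ K.T / np.sqrt(d_model)
scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax

output = weights @ V   # context-aware representation of each token

print(weights.round(2))  # each row sums to 1: one token's attention over all tokens
```

Because `scores` is computed for all token pairs in one matrix multiplication, the whole sequence is processed in parallel rather than word by word, which is the step change over RNNs described above.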
What is a Large Language Model?
Generative AI Fundamentals Part 2: https://youtu.be/HBHhYgH3RiE?si=kZaxQdeiAnMHkPux

What is a Large Language Model? A large language model (LLM) is a multi-billion-parameter language model trained on extensive text datasets to predict the next word in a self-supervised manner.

How does a Large Language Model work? LLMs work by analyzing vast amounts of text, learning statistical patterns and relationships between words, and using this knowledge to generate text that is coherent and contextually appropriate.

For a deeper dive, check out Andrej Karpathy's talk: https://lnkd.in/gshNJUkP
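Here is a minimal sketch of that next-word prediction turned into text generation: the model repeatedly predicts a next token and appends it. A small GPT-2 stands in for a multi-billion-parameter LLM, and the prompt is an illustrative assumption.

```python
# A minimal sketch of how an LLM generates text: predict the most
# likely next token, append it, and repeat. gpt2 is a small stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Data protection is", return_tensors="pt")["input_ids"]
for _ in range(20):                       # generate 20 tokens
    with torch.no_grad():
        logits = model(ids).logits        # scores for every possible next token
    next_id = logits[0, -1].argmax()      # greedy: take the most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Production systems typically sample from the next-token distribution instead of always taking the argmax, which makes the output less repetitive.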
What is Generative AI?
Introducing Part 1 of the Generative AI Fundamentals series: https://youtu.be/Hr1_DPh1sEU?si=4j4Ere4ImZHObjEo

What is Generative AI? Generative AI refers to AI systems that can generate new content based on a variety of inputs. Inputs to and outputs from these models can include text, images, audio, 3D models, or other types of data. Large language models (LLMs) such as GPT-4 and Llama 2 are popular examples of Generative AI.

How does Generative AI work? Generative AI models employ neural networks to identify patterns and structures in existing data, enabling them to generate new and original content. A notable breakthrough in generative AI is the ability to apply various learning approaches, including unsupervised and semi-supervised learning, in training. This has empowered organizations to use large volumes of unlabeled data more effectively and rapidly to create foundation models. These models, as the name suggests, serve as a base for AI systems capable of performing multiple tasks. Examples of foundation models include GPT-4 and Stable Diffusion. For instance, ChatGPT, powered by GPT-4, allows users to generate essays from short text prompts, while Stable Diffusion can create photorealistic images from text inputs.

In the next video we will dive deeper into Large Language Models. Stay tuned!
About Veritas AI Group
Join this group to connect with Veritas subject matter experts and industry peers on AI.