Generative AI as a Learning Tool – O'Reilly


At O'Reilly, we're not just creating training content about AI. We're also using it to create new kinds of learning experiences. One of the ways we're putting AI to work is our new feature, Answers. Answers is a generative AI-powered feature that answers questions within the learning flow. It's in every book, on-demand course, and video, and it will eventually be available across our entire learning platform. To use it, click the “Answers” icon (the last item in the list on the right side of the screen).


Learn fast. Dig deep. Look ahead.


Answers enables active learning: interact with the content by asking questions and getting answers, rather than simply working through a book or video in sequence. If you're solving a problem for work, it puts learning into the workflow. It's natural to have questions when you're working on something. Those of us who remember print books also remember having a stack of books on our desks, open upside down (to save the page), as we dug deeper into a problem. Something similar happens online: you open so many tabs looking for an answer that you can't remember which is which. Why can't you just ask a question and get an answer? Now you can.

Here are some insights into the decisions we made while building Answers. Of course, everything is subject to change; that's the first thing you need to realize before starting any AI project. This is uncharted territory. Everything is an experiment. You don't know how people will use your application until you build and deploy it. There are many questions about Answers that we're still waiting to answer. It's important to be careful when deploying an AI application, but it's also important to realize that all AI is experimental.

The core of Answers was built in collaboration with a partner that provided AI expertise. This is an important rule, especially for small companies: don't build it yourself when you can partner with others. It would have been very difficult for us to develop the expertise to build and train a model; it was much more efficient to work with a company that already had that expertise. There will be plenty of decisions and issues for your own staff to address and resolve. Leave the heavy AI lifting to someone else, at least for your first few products. Focus on understanding the problem you're solving. What are your specific use cases? What kinds of answers will your customers expect? What kinds of answers do you want to give? Think about how the answers to these questions affect your business model.

If you build a service like a chatbot, you should think seriously about how it will be used: what kinds of prompts to expect and what kinds of responses to return. Answers places some restrictions on the questions you can ask. While most users think of O'Reilly as a resource for software developers and IT departments, our platform includes many other kinds of information. Answers can handle questions about topics like chemistry, biology, and climate change: whatever is on our platform. However, it differs from chat applications like ChatGPT in several ways. First, it's limited to questions and answers. Although it suggests follow-up questions, it isn't conversational; each new question starts a new context. We believe that many companies experimenting with AI want conversation for its own sake rather than to achieve a goal, perhaps with the aim of monopolizing their customers' attention. We want our users to learn. We want our customers to solve their technical problems. Conversation for its own sake doesn't fit this use case. We want interactions to be short, direct, and to the point.

Limiting Answers to Q&A also reduces the potential for misuse. It's hard to lead an AI system “off the rails” when you're limited to questions and answers. (Honeycomb, one of the first companies to integrate ChatGPT into a software product, made a similar decision.)

Unlike many AI-powered products, Answers will tell you when it doesn't actually have an answer. For example, if you ask it “Who won the World Series?” it will respond “I don't have enough information to answer this question.” If you ask a question that it can't answer but for which our platform may have relevant information, it will point you to that information. This design decision was simple but incredibly important. Few AI systems will tell you that they can't answer a question, and that inability is a major source of hallucination, errors, and other kinds of misinformation. Most AI engines can't say “Sorry, I don't know.” Ours can, and will.
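
The refusal behavior described above can be sketched as a simple guardrail: only answer when the retrieved material is relevant enough, and refuse otherwise. This is an illustrative sketch, not O'Reilly's implementation; the function name, scoring scheme, and threshold are all assumptions.

```python
# Hypothetical guardrail: refuse to answer when no retrieved document
# clears a relevance threshold, instead of letting the model guess.

REFUSAL = "I don't have enough information to answer this question."

def answer_or_refuse(question, retrieved, min_score=0.75):
    """retrieved: list of (document_text, relevance_score) pairs."""
    relevant = [doc for doc, score in retrieved if score >= min_score]
    if not relevant:
        return REFUSAL  # refuse rather than hallucinate
    context = "\n\n".join(relevant)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The key design choice is that the refusal happens *before* the language model is called: if retrieval finds nothing relevant, no prompt is generated at all.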

Answers always attributes its responses to specific content, which allows us to compensate our talent and our partner publishers. Designing the compensation plan was an important part of the project. We're committed to treating authors fairly; we won't just generate answers from their content without paying them. When a user asks a question, Answers generates a short response and provides links to the resources from which it drew the information. This data feeds into our compensation model, which is designed to be revenue neutral: it doesn't penalize our talent when we generate answers from their content.

Designing Answers was more complex than you might expect, and it's important for organizations embarking on AI projects to understand that “the simplest thing that could possibly work” probably won't work. From the beginning, we knew we couldn't just use a model like GPT or Gemini. Besides being error-prone, such models provide no mechanism for reporting what data was used to generate an answer, data we need as input to our compensation model. That immediately pushed us to the retrieval-augmented generation (RAG) pattern, which provided a solution. With RAG, a program generates a prompt that contains both the question and the data needed to answer it. That expanded prompt is sent to the language model, which provides a response. We can compensate our talent because we know what data was used to generate the answer.
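
In rough outline, the RAG step looks something like the following sketch: assemble a prompt from the question plus the retrieved documents, and keep a record of which documents were used so attribution (and therefore compensation) is possible. The function name and document format are hypothetical.

```python
# Minimal RAG prompt assembly. The returned 'sources' list is the record
# that makes attribution possible; a compensation model could consume it.

def build_rag_prompt(question, documents):
    """documents: list of dicts with 'title' and 'text' keys."""
    context = "\n\n".join(f"[{d['title']}]\n{d['text']}" for d in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    sources = [d["title"] for d in documents]  # attribution record
    return prompt, sources
```

The prompt goes to the language model; the sources list goes to whatever system tracks which content earned the answer.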

Using RAG raises the question: where do those documents come from? Another AI model queries our platform's content database to generate “candidate” documents. Yet another model ranks the candidates, choosing those that seem most useful. A third model reevaluates each candidate to make sure it is genuinely relevant and useful. Finally, the selected documents are trimmed to minimize content that isn't relevant to the query. This process has two purposes: it reduces hallucination, and it reduces the amount of context that must be sent to the model that answers the query. The more context required, the longer it takes to get an answer, and the more expensive the model is to run. Most of the models we use are small open source models; they're fast, efficient, and cheap.
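
The four stages described above (generate candidates, rank them, filter for relevance, trim) can be sketched as a single pipeline. The toy keyword-overlap scoring below stands in for the learned models a real system would use; every name and threshold here is an assumption for illustration.

```python
# Toy retrieval pipeline mirroring the four stages described in the text.
# Real systems replace keyword overlap with separate learned models.

def score(query, doc):
    """Fraction of query words appearing in the document (toy relevance)."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q)

def retrieve(query, corpus, top_k=3, min_score=0.3, max_chars=200):
    candidates = [(doc, score(query, doc)) for doc in corpus]      # 1: candidates
    ranked = sorted(candidates, key=lambda p: p[1], reverse=True)  # 2: rank
    kept = [doc for doc, s in ranked[:top_k] if s >= min_score]    # 3: re-check relevance
    return [doc[:max_chars] for doc in kept]                       # 4: trim
```

Each stage narrows or shrinks the document set, which is exactly what keeps the final prompt small and cheap.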

In addition to reducing hallucination and making it possible to attribute content to its creators (and pay royalties accordingly), this design makes it easy to add new content. We're constantly adding new content to the platform: thousands of items every year. With a model like GPT, adding content would require a long and expensive training process. With RAG, adding content is trivial. When anything is added to the platform, it's added to the database from which relevant content is selected. That process isn't computationally intensive, and it happens almost instantaneously: in real time, as it were. Answers never lags behind the rest of the platform. Users will never see a notice that “this model has only been trained on data up to July 2023.”
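
The reason updates are cheap is that new content only has to be indexed, never trained on. A minimal sketch, assuming an in-memory index as a stand-in for a real content database:

```python
# Why RAG updates are cheap: publishing just appends to the index that
# retrieval searches. No model retraining is involved.

class ContentIndex:
    def __init__(self):
        self.docs = []

    def add(self, title, text):
        """Called when an item is published; near-instant, no training step."""
        self.docs.append({"title": title, "text": text})

    def search(self, query):
        """Return titles of documents sharing any word with the query."""
        q = set(query.lower().split())
        return [d["title"] for d in self.docs
                if q & set(d["text"].lower().split())]

index = ContentIndex()
index.add("New Kubernetes Book", "kubernetes cluster networking explained")
# The new book is immediately searchable: no training-cutoff lag.
```

A production system would compute embeddings at `add` time rather than matching keywords, but the update path is the same: index the item, and it's live.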

Answers is a product, but it's only one piece of the ecosystem of tools we're building. All of these tools are designed to deliver a learning experience: to help our users and our corporate clients develop the skills they need to stay relevant in a changing world. That's the goal, and it's also the key to building successful applications with generative AI. What is the real purpose? It isn't impressing your customers with your AI expertise. It's solving some problem; in our case, helping learners acquire new skills more effectively. Focus on that goal, not on the AI. AI will be an important tool, perhaps the most important tool, but it isn't an end in itself.

