Instead of displaying links, Arc Search's “Browse for Me” feature reads the first handful of pages and summarizes them into a single, custom-built, Arc-formatted web page using large language models from OpenAI and others. Critics say that's a problem.
Current AI can already “read” documentation that isn’t part of its training set. Bing Chat, for example, does web searches and bases its answers partly on the text of the pages it finds. I’ve got a local AI, GPT4All, that you can point at a directory full of documents and tell “include that in your context when answering questions.” So we’re already getting there.
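That “point it at a directory” approach boils down to stuffing the documents into the prompt. A minimal sketch, assuming plain-text `.txt` docs and ignoring the actual model call (the `make_prompt` helper and the character budget are illustrative, not any tool’s real API):

```python
from pathlib import Path

def build_context(doc_dir, max_chars=8000):
    """Concatenate plain-text docs into one context block, truncated to a budget."""
    parts = []
    total = 0
    for path in sorted(Path(doc_dir).glob("*.txt")):
        chunk = f"--- {path.name} ---\n{path.read_text(encoding='utf-8')}\n"
        if total + len(chunk) > max_chars:
            chunk = chunk[: max_chars - total]  # crude truncation at the budget
        parts.append(chunk)
        total += len(chunk)
        if total >= max_chars:
            break
    return "".join(parts)

def make_prompt(doc_dir, question):
    """Build a prompt that asks the model to answer only from the included docs."""
    return (
        "Answer using only the documentation below.\n\n"
        f"{build_context(doc_dir)}\n"
        f"Question: {question}\n"
    )
```

Real tools chunk and rank the documents (retrieval) rather than concatenating them wholesale, which is part of why results vary so much with long or obscure docs.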
Getting there, but I can say from experience that it’s mostly useless with the current offerings. I’ve tried using GPT-4 and Claude 2 to answer questions about less-popular command-line tools and Python modules by pointing them at the complete docs, and I was not able to get meaningful answers. :(
Perhaps you could automate a more exhaustive fine-tuning of an LLM based on such material. I have not tried that, and I am not well-versed in the process.
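I haven’t tried it either, but the automatable part would start with converting the docs into training records. A rough sketch under assumed conventions (naive paragraph splitting, a made-up prompt template, JSONL output as many fine-tuning pipelines expect):

```python
import json
from pathlib import Path

def docs_to_jsonl(doc_dir, out_path):
    """Turn each paragraph of each .txt doc into a prompt/completion record.

    Splits on blank lines, which is naive; a real pipeline would chunk
    more carefully and write better prompts.
    """
    records = []
    for path in sorted(Path(doc_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        for para in (p.strip() for p in text.split("\n\n")):
            if para:
                records.append({
                    "prompt": f"Explain this section of {path.name}:",
                    "completion": para,
                })
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return len(records)
```

Whether records built this mechanically actually teach the model anything useful is exactly the open question.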
What about GitHub Copilot? It has tons of material available for training. Of course, that material isn’t necessarily all bug-free or well written.