Instead of displaying links, Arc Search's “Browse for Me” feature reads the first handful of pages and summarizes them into a single, custom-built, Arc-formatted web page using large language models from OpenAI and others. Critics say that's a problem.
This is what I wondered about a few months ago when people were calling ChatGPT a ‘Google killer’. So we just have ‘AI’ read websites and summarize them, instead of visiting them ourselves? Why would anyone bother putting information on a website at that point?
We are barreling towards this issue. Stack Overflow, for example, has seen its traffic crash. But an AI isn’t going to help users navigate and figure out a new Python library without data to train on. I’ve already had AIs straight up hallucinate R functions that don’t exist. It seems to happen primarily with the newer libraries, probably because there are fewer Stack Exchange posts about them.
Current AI will not. Future AI should be able to, as long as there is accurate documentation. This is the natural direction for advancement. The only way it doesn’t happen is if we’ve truly hit the plateau already, and that seems very unlikely. GPT-4 is going to look like a cheap toy in a few years, most likely.
And if the AI researchers can’t crack that nut fast enough, then API developers will write more machine-friendly documentation and training functions. That kind of documentation could become as ubiquitous as unit testing.
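One flavor of this already exists in Python: doctests, where the documentation is itself executable and doubles as a test suite. A minimal sketch (the `clamp` function below is a made-up illustration, not from any real library):

```python
def clamp(value: float, low: float, high: float) -> float:
    """Restrict `value` to the closed interval [low, high].

    The examples below are both documentation and tests:

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(42, 0, 10)
    10
    """
    return max(low, min(high, value))

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # executes the examples embedded in the docstring
```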
Current AI can already “read” documentation that isn’t part of its training set, actually. Bing Chat, for example, runs web searches and bases its answers in part on the text of the pages it finds. I’ve got a local AI, GPT4All, that you can point at a directory full of documents and tell, “include that in your context when answering questions.” So we’re already getting there.
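For a sense of what that “point it at a directory” workflow looks like in code, here’s a minimal sketch using the gpt4all Python bindings. The model filename and the crude keyword-overlap retrieval are my own assumptions for illustration; real tools (including GPT4All’s own LocalDocs feature) do something more sophisticated with embeddings:

```python
from pathlib import Path
from gpt4all import GPT4All

def load_docs(directory: str) -> list[str]:
    """Read every .txt/.md file under `directory` into memory."""
    return [p.read_text() for p in Path(directory).rglob("*")
            if p.suffix in {".txt", ".md"}]

def most_relevant(docs: list[str], question: str, k: int = 3) -> list[str]:
    """Crude keyword-overlap ranking; stands in for embedding retrieval."""
    words = set(question.lower().split())
    return sorted(docs,
                  key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]

# Assumption: any locally downloaded GGUF model works here.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

docs = load_docs("my_docs")
question = "How do I configure retries in this library?"
context = "\n\n".join(most_relevant(docs, question))

# Stuff the retrieved docs into the prompt so the model answers from them.
answer = model.generate(
    f"Using only the documentation below, answer the question.\n\n"
    f"{context}\n\nQuestion: {question}\nAnswer:",
    max_tokens=300)
print(answer)
```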
Getting there, but I can say from experience that it’s mostly useless with the current offerings. I’ve tried using GPT-4 and Claude 2 to answer questions about less-popular command-line tools and Python modules by pointing them at the complete docs, and I was not able to get meaningful answers. :(
Perhaps you could automate a more exhaustive fine-tuning of an LLM based on such material. I have not tried that, and I am not well-versed in the process.
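For what it’s worth, the mechanics aren’t exotic. Here’s a minimal sketch of fine-tuning a small causal LM on a directory of documentation text using the Hugging Face transformers and datasets libraries; the model choice, paths, and hyperparameters are all placeholder assumptions, not a recipe I’ve validated for this use case:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the library's docs as plain text, one training example per line.
docs = load_dataset("text", data_files={"train": "docs/*.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = docs.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="docs-finetune",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Standard causal-LM objective: predict the next token of the docs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```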
What about GitHub Copilot? It has tons of material available for training. Of course, that material isn’t necessarily all bug-free or well written.