The MIT Technology Review put out an article on the end of internet search as we know it, due to Google's integrated AI summaries. I'm not sure how we can trust the AI results when AI can still hallucinate when it doesn't know the answer. Until there are guardrails around hallucinations, how do we trust the results? Or how do we keep from telling people the wrong answer? I've only used LLMs as an adjunct to my daily life and work, since I don't trust them and am generally opposed to how they were trained and how they're being used (against us humans). In most of those cases they provided a good jumping-off point, but not an answer of the caliber I would rely on in my professional or personal life.