Like most people with an internet connection, I’m used to turning to Google for just about everything. From recipes and cold remedies to recollecting 90s radio hits, I pick up my phone or fire up my laptop and use Google to find answers. But within the last few months, I’ve found myself increasingly using AI tools like ChatGPT, Meta AI, Claude, and Google’s own Gemini for day-to-day answers.
I enjoy the personal aspect of receiving clear, conversational responses from these AI models, especially when compared to sifting through endless links. I prefer one direct answer from a chatbot over a string of Google links that I then have to comb through again to find what I need. And I know I’m not alone in wanting answers tailored to me, especially with OpenAI launching SearchGPT later this year.
The value of AI in search
When I first started using AI, I was impressed by how effortlessly I could ask ChatGPT or Claude nuanced questions and get instant, thorough answers without wading through advertisements or SEO-optimized pages.
Meta AI, with its integration into platforms like Instagram and Facebook, also proved highly intuitive without sponsored or unrelated links. Even Google’s Gemini AI has made strides in delivering responses that feel conversational rather than coldly algorithmic.
Yet this AI-first approach isn’t without its drawbacks. One major criticism is AI’s tendency to hallucinate — confidently giving wrong or biased information. I can usually tell when the AI is wrong, particularly if I already have an inkling about the topic, but not always.
This can be a real problem if I am using a chatbot for answers in the same way I use Google. Yes, Google can also return inaccurate results, but AI models often don’t provide clear citations, making it harder to verify the information. For instance, just today I called out ChatGPT for giving me a bit of wrong information about tech, and Meta AI has been criticized for delivering contextually incorrect answers.
Unfortunately, AI models tend to function in a closed-loop ecosystem, which narrows the diversity of content I’d otherwise discover through Google’s endless search results. Google’s traditional search, with its links and varied sources, provides a level of transparency and cross-referencing where it remains the clear winner. It’s also concerning that as AI models become dominant for searches, they may deprioritize original content creators; there’s less incentive to click on articles or visit websites when you get answers directly from the AI.
Of course, privacy is another growing concern. While Google has been criticized for its data-harvesting practices, AI models require immense amounts of data to train and improve. This includes the content of user queries, leading to potential privacy issues. The reliance of ChatGPT, Meta AI, and other AI models on this data collection raises questions about how much information they’re gathering and whether AI models could pose an even greater threat to privacy than Google.
Getting personal with AI
In terms of customization, these AI models often provide hyper-personalized responses based on past interactions, which I enjoy, but this can create a bubble effect — limiting exposure to new ideas or alternative perspectives.
There have been times when I’ve asked ChatGPT a question about one topic, then another about a completely different topic, and it will somehow combine the answers or skip the second question altogether. Google’s traditional search, with its more varied returns, still feels like a broader look at the web.
In short, AI has revolutionized the way I search for and receive information, often outpacing Google in convenience and speed. However, the risks of misinformation, privacy concerns, and a shrinking pool of content sources remind me that AI isn’t perfect. While I’ve all but given up on Google, I’m cautious about relying too heavily on any one tool — AI included — in this rapidly evolving tech landscape.