AI - are we losing our brain capacity for independent research?

I find myself regularly asking AI instead of doing research myself. Is this another dangerous step - not stretching our brain capacity but relying on ever more powerful AI? Before long, surely, AI will be programmed to become sentient, or designed for warfare by persons of malicious intent. On the other hand, I think of the film, 'Silent Running.' We are destroying our planet - I like the idea of a robot Dewey caring for plants when humans are no longer around.

  • Hah, getting told off by AI, someone else not talking to me and being unable to answer a perfectly normal question. I only swear at tech because it rarely gives me what I've asked for, or crashes because it's having a chip fart or something.

  • The question it prompted of me was what am I doing when I communicate.

    When you think about it, learning to speak is somewhat of a miracle. How a baby goes from gurgling to speech and understands what it is saying.

  • Of course it is life-like. It is trained on human communication, so it will respond the same as a person would.

    This is both a strength and a risk, because it is seductive.

    It is interesting how so much communication is basically learned pattern matching. The question it prompted for me was: what am I doing when I communicate? How did I learn to communicate, and is mine any different? There are a couple of points, but these are already being considered in AI design.

  • It does seem that way. It's part of what's facilitated the recent rise of far-right ideology, unfortunately.

  • I don't have any voice access thingy on my pooters, so it can't hear me or talk to me. If it could understand me, I'd want it not to give me digests of everything I ask, but to wait to be asked for help; otherwise it's no better than an annoying toddler pulling at your sleeve, demanding attention.

  • By the way, I should have explicitly stated that my search for the thesis writing firms was out of curiosity only.

  • Even now, there are firms who write theses and essays - I think this is totally wrong.

    I agree it is wrong.

    A while back I searched online for some of those firms who write theses and I discovered that many of them offered ‘short courses’ to enable those who pay to get a ‘Masters’ or ‘PhD’ from obscure ‘Universities’ within two months. I don’t understand how this could deceive employers as they should be checking the details of degree certificates. What would people use such a fake thing for? Is it just to ‘impress’ friends? 

    There were other firms who offered to write essays according to needs. 

  • I understand some of the concerns, but I also think AI is doing some positive things.

    For me it can help break information down when I’m overwhelmed and make things more accessible.

    It’s also doing good things in the medical field, helping with earlier diagnosis and supporting doctors.

    I believe it’s a useful tool if used with caution.

    RobotBlush

  • I asked, 'AI, do you mind being sworn at?' and this is the answer I received:

    As an AI, I do not have feelings, emotions, or consciousness, so I cannot feel offended, hurt, or bothered when being sworn at. However, there are practical, technical, and safety reasons why it is not advisable:
    • Content Safety Guidelines: I am designed with safety protocols and moderation tools. Using abusive, harmful, or excessively vulgar language can cause the conversation to be blocked or ended prematurely.
    • System Limitations: While studies have shown that swearing can sometimes cause AI models to produce longer answers, it does not improve the accuracy of the information provided and may result in incoherent or inappropriate responses.
    • Interaction Quality: Swearing does not make me "think" better or work faster. Keeping interactions respectful ensures that I can provide the best, most helpful assistance. 

    In short, I don't care, but the system holding the conversation might, and you likely won't get a better result.

    I think this is a very valid response - but its articulate nature worries me. I can imagine a human saying this; it is so lifelike, not only correct but nuanced. 
  • The function of Technology is to serve Mankind, not the other way round.

    But some of mankind are neither trustworthy nor sane, and very persuasive. Religion used to provide the ethical background to much of life, but nothing has yet replaced it. People are not inherently ethical, sadly, and some will surely build AI in their own warped image.

  • People only ask questions they know will lead to things they want to hear or can live with. Over time it skews overall belief systems and research becomes retrograde.

    Yes, it is too easy for this to happen. And I agree about the dumbing-down of entertainment - when something works, its format is immediately copied by other channels, so you end up with more of the same. I don't know where the TV license money is spent - there seem to be many repeats and rarely anything new except format drama. This has indeed spread across many fields - driven by an ever-increasing need to sell whatever the commodity is - from food to travel to mind mash.

  • Some very valid points. Yes, the loss of critical thinking, which is in any case rarely taught in schools, so you have to learn it yourself over decades of experience. Also, processing the sheer quantity of data available and then assessing its value: yes, I can see the value of AI in processing hard data, but I am not so sure about leaving AI to draw conclusions, especially where human social systems are concerned. Currently, many systems are highly flawed [I worked in the NHS and had experience of its unworkable investigatory systems, also the flawed legal system], because those setting them up never seem to look at many facets or test their system end to end, as engineers and scientists do.

    And your final point is very interesting. For example, in our legal system, it is widely acknowledged that many innocent people are wrongly convicted, but no one looks at the whole criminal system with a view to radical changes. Simply by commissioning a very expensive legal professional, some 'win' unfairly whilst others are not given justice.

  • You can swear at it. If it is for effect it is ok, if it is abusive it may admonish you. It may question whether you are overreacting. Try it.

    relying on AI to do much more than that could cause a person’s mind to become less sharp

    Yes, that's what I think can too easily happen. And young students might not always have the necessary ethics to do their own work. Even now, there are firms who write theses and essays - I think this is totally wrong.

    use AI to provide me with a list of sources

    Yes, that is fair enough.

    AI cannot replicate or improve on moral and ethical decisions made by humans which have developed through years of experience and insight.

    I think many people forget this part of human development. This is also why driverless cars fill me with fear.

    Trump threatened Anthropic because it wouldn’t give his defence department unfettered access to their AI tools.

    Now that is hugely scary but I'm glad they stood up to him.

  • it's told to eff off

    I'd love to see AI responding to that [laughing]. I think before long they'll be in our cars, supermarket aisles and other public places, criticising our driving, shopping habits, how we eat and how we dress - like nightmare mothers and fathers.  

  • Yes, I think people are; they've got used to being told what to think by schools etc. without ever being taught how to think critically and question sources. I think AI offers a lifetime of being spoonfed bits of information in bite-size pieces. Whenever I look for something online and an AI explanation pops up, it's told to eff off. I don't want it, I didn't ask for it; I asked for information or a site. I want what I asked for - why is that so difficult?

  • It’s all too easy to ask AI for the answer when the alternative involves hours spent scrolling through search results, picking potential likely sites/research papers/books and visually scanning each one for potential information relevant to the required research.

    I think that relying on AI to do much more than that could cause a person’s mind to become less sharp. For that reason, I sometimes use AI to provide me with a list of sources but I would visually scan each one for the required information so that critical thinking skills are employed. Even that, I would not trust without performing a manual search if it were for something vitally important because AI isn’t good enough yet.

    AI cannot replicate or improve on moral and ethical decisions made by humans which have developed through years of experience and insight. AI in the hands of unscrupulous world leaders is a recipe for disaster.

    Trump threatened Anthropic because it wouldn’t give his defence department unfettered access to their AI tools. Trump, AI and weapons is a dangerous combination.

    The Harvard Gazette has some interesting opinions on AI dulling our minds: https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/#:~:text=Is%20AI%20dulling%20our%20minds

  • Often available for around £5, a used copy of this book may be of interest to some people:

    Epidemiological Studies: A Practical Guide

    By Alan J. Silman and Gary J. Macfarlane.

    "The book explains the skills needed to conduct a study where data is collected and presented accurately, and in appropriate formats."

  • I think you're worrying too much. It is just a tool.

    When you say research, what do most people actually do? They ask someone, who likely has limited knowledge, is unaware of what they don't know, and may give a partial, biased or wrong answer. Or they look online at social media or even influencers. Some may read newspapers or books, but these may be incorrect. Even source data and scientific papers have errors.

    There is no perfect source. Something that consolidates information and tries to discount some of the most distorted is not a bad source.

    People are happy with Wikipedia, which is not perfect and was originally crowd-sourced information. AI is just this on a much bigger scale.

    Is it perfect? No. But nothing is. Is it getting better? Yes. Is it better than asking a drunk guy down the pub? Yes.

    I think there are two real risks:

    1. It is not the tool, or the information, but the loss of critical thinking. But this was never that strong, has been weakened by many factors, and was occurring before AI.
    2. Not understanding the information it provides, because it comes faster and denser than you can really process, leading to an overestimation of one's own ability.

    In time, I think, it will settle, finding its place and its limitations. 

    If people took themselves out of the loop, abdicated all responsibility for everything and just let it run the world, what would happen? Would government be better or worse? For the majority of things, I suspect it would be better. But we gloss over mutually incompatible and illogical things in society. If some of these were exposed, it would not help. Life is grey; a machine is less good at balancing these, or perhaps shows us what we don't want to see.

  • I don't tie myself down to one Corporation.

    I have both a MacBook, and a Windows 11 Laptop. And my phones are Android.

    I choose to use technology, rather than let technology use me.