Is AI good or bad?

Is AI a good idea or a bad idea? 

  • Even with AI available to everyone, it doesn’t level the field so much as it raises it. Students with strong intellectual curiosity, critical thinking, and discernment will still get the most out of it. Those are the same qualities that make someone an A student in the first place.

    AI can suggest ideas, rephrase things, or help you understand a concept — but it can’t tell if something is good. It doesn’t know whether an argument is persuasive, whether a reference is relevant, or whether the tone is appropriate for the context. That still takes judgment, and judgment is earned through learning, practice, and genuine interest in the subject.

    If everyone has access, the most capable, curious, and engaged students still have the edge. It's not that AI hands out top grades. It's more like it gives you a sharper chisel — but you still need to be the sculptor.

  • I agree — AI isn’t inherently good or bad.

    It is also not truly intelligent. It doesn't understand or reason; it predicts and generates based on patterns in vast datasets. Still, it's highly sophisticated and evolving fast, with wide applications — from medical research and co-pilot coding to music, image, and video generation, and even decoding animal communication like whale codas (CETI/language galaxy).

    In healthcare, AI is already proving valuable. It helps with early detection, supports overstretched systems, and assists specialists in screening tasks with impressive speed and accuracy.

    There are real risks too: high energy use, job displacement, copyright infringement, fraud, impersonation, propaganda, and democracy manipulation. AI can also produce false but confident outputs — often called hallucinations — and many models lack transparency. Bias in training data can lead to skewed or unfair results, especially for underrepresented groups.

    AI gives very powerful tools to millions around the world, which is a huge benefit. But it also floods the internet with low-quality, repetitive content — websites, music, images, and articles — devaluing original human work. This especially affects writers and creatives.

    I’ve worked extensively with language models, image generators, and music tools. It’s easy to get generic output, but creating something original or meaningful takes time, effort, and skill, especially in prompt craft. 

    A growing concern is that AI-generated content is now being used to train future models. This creates feedback loops that reduce quality over time. It’s a form of data contamination that can weaken accuracy, diversity, and creativity. Like technical debt in code, it introduces a kind of data debt — where short-term gains lead to long-term fragility. As this builds up, models may become harder to trust, improve, or audit.

    One of the most exciting uses for me is autism support. I’ve trained a text-to-speech model on my own voice, which helps me communicate more naturally — and it can even sing. I’m also interested in developing an ASD filter. With enough care and input, ASD-focused models could lead to real breakthroughs in understanding and accessibility.

    These are my words and ideas, cleaned up by ChatGPT — which you can usually spot by its fondness for long-form em dashes 🙂

  • Depends. It is capable of some incredible things, but it is still a double-edged sword, like any other form of technology.

  • I see this question in different ways. 

    There are the developers of AI, the AIs themselves, and then the users of AI.

    And not only that, but there's also the database that the AI draws information from. 

    All of these have the chance of going bad. 

    I personally think AI is pretty neat, but I can also see how easily it can turn into a bad thing in the wrong hands, or if AI gains autonomy.

    I'm falling asleep so I can't type much more than this. 

  • I rather think that it might be very bad in the long run by stultifying human creativity and innovation. I think AI can produce highly polished turds, but not creative gems.

    For instance, feed AI a lot of Picasso's work and it would be able to create a novel simulacrum of a Picasso piece. However, Picasso produced his art out of his lived experience and all the non-Picasso art that had gone before. AI could not do anything comparable.

  • I can't speak for everyone, but for me it's been really helpful. I have a lot of questions about a lot of things, and AI is good at explaining answers in a way I understand. I also like to keep up with the news, but it gives me a lot of anxiety, so I use AI to update me in a realistic and non-anxiety-inducing way. It can also be used for fun things, such as editing dogs into funny scenarios. I guess, as others have said, it does come with risks: people could abuse it, rely too much on it, etc. But for me personally, so far it's been a good thing.

  • It is a tool, like a phone, or a hammer, or a car.

    I don't think it is intrinsically good or bad. It is down to how it is used.

    There will be a transitional period while people work out what it is good for. Don't believe all the hype. I think it will be found to have shortcomings, but be useful within constraints.

  • I like this response 

  • As with most technology, it depends on what it's being used for. In the case of AI, it also depends on what education/input it has received--kinda like people in that regard.  

  • AI is the antichrist.

  • AI is neither good nor bad. The issue is how it is used. I was chatting with a friend about his son's experience at Essex Uni reading politics. It seems a lot of students use AI to write well-crafted essays and get top marks when they don't actually do the work.

  • I think they already are. I think when mixed with robotics it starts getting scary. I hate the thought of androids, probably because masks, puppets, clowns and stuff like that really freak me out; my immediate instinct is to lash out. I think I'd end up with more than the usual breakage that inevitably ensues when I'm around things with silicon chips; the android would probably end up with bits broken and missing.

    I don't think humanity is wise enough yet to have anything like the tech we do.

    I don't like the sort of AI-assisted customer (dis)service bots we have now. I'm trying really hard to think of anything that's been made better by it. Maybe some of the scanning techniques in medicine and archaeology.

    I don't think we have anywhere near enough law and regulation around it; we don't have enough for the horrible uses of the tech we have now, let alone in the future.

  • This would be a bit like asking whether people are good or bad, I think. You could answer either way, or both, or neither. I don't think there is a binary choice here. Regardless, it was probably inevitable given how technology progresses through history, and the best intentions for any invention can always be subverted for less desirable outcomes.