Generative AI isn’t trying to kill you. It just doesn’t know any better


You’ve probably seen a few scary headlines about how the output from some AI systems can be offensive or off-topic. And it’s true: An open-ended generative AI platform–one trained on billions of data points and relatively unconstrained in its output–can come up with results that are interesting and unexpected, but also potentially harmful.

Never the same river twice

Typical generative AI systems respond to user input with wide latitude, drawing on a vast, scarcely bounded pool of background data. And that’s OK; for those systems, the unbounded viewpoint is part of their purpose. The answers they provide to user queries are always changing as they incorporate new sources of information–often unvetted or clashing ones. Their source data is simply expanding far faster than it can be checked for accuracy or consistency, and the results are highly unpredictable. In short: today’s generative systems typically favor quantity over quality, and their results vary wildly.

Tightly constrained conversational AI systems like Meera take a very different approach, because they incorporate rules that guide every interaction. For Meera, the focus isn’t simply on expanding the pool of possible answers. Instead, the most important job is understanding user input and discerning intent, then supplying appropriate, human-friendly answers from a tightly bounded pool of information. If a potential student inquires about a school’s hours of operation, for instance, Meera can understand the nature of the question and reply with an answer that’s been prepared, tested, and personalized.
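To make the idea concrete, here is a minimal sketch of that pattern: match an inbound message to a known intent, reply only from a curated pool of pre-approved answers, and hand off to a person when nothing matches. Everything here–the intents, keywords, and responses–is illustrative, not Meera’s actual implementation.

```python
import re

# Curated, pre-approved responses keyed by intent.
RESPONSES = {
    "hours": "The admissions office is open Monday-Friday, 9 a.m. to 5 p.m.",
    "apply": "You can start an application any time; an advisor will follow up.",
}

# Simple keyword triggers for each intent (illustrative only).
INTENT_KEYWORDS = {
    "hours": {"hours", "open", "closed", "when"},
    "apply": {"apply", "application", "enroll"},
}

FALLBACK = "I'm not sure about that, so let me connect you with a person."

def reply(message: str) -> str:
    """Answer from the bounded pool, or decline rather than improvise."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return RESPONSES[intent]
    return FALLBACK
```

The key design choice is the fallback: an unrecognized question never produces an invented answer, only a graceful handoff.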


If you want to communicate with students through automated texting, the automation is only a means to an end: the messaging that reaches them still needs to be your own.

That means you can’t simply plug a general-purpose AI engine into a texting automation system and expect it to reply to inquiries with the answers you’d hoped for. Instead, you need a system whose memory banks are populated with the right base of information.

That’s simply the same kind of information that you’d communicate over the phone, in an email, or in person, but converted into succinct, consistent responses suitable for friendly on-screen display. It’s an obvious point, maybe: the responses you’ll get out of a system are determined by the quality of its input, and that’s a good thing.


Beyond this, remember that AI is only one aspect of intelligently automating communications. A general-purpose AI system doesn’t care about important human experience factors like reaching people at the time of day that fits their schedule best or knowing how many messages are simply too many.
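Those human-experience factors can be expressed as simple guardrails layered on top of any AI engine. Here is a hedged sketch of two of them–a quiet-hours window and a per-contact daily cap–with thresholds that are purely illustrative:

```python
from datetime import datetime

QUIET_START, QUIET_END = 21, 9  # no texts between 9 p.m. and 9 a.m. local time
MAX_PER_DAY = 3                 # illustrative cap on messages per contact per day

def may_send(local_now: datetime, sent_today: int) -> bool:
    """Return True only if sending respects quiet hours and the daily cap."""
    in_quiet_hours = local_now.hour >= QUIET_START or local_now.hour < QUIET_END
    return not in_quiet_hours and sent_today < MAX_PER_DAY
```

Note that the check uses the recipient’s local time, not the sender’s: reaching people at the time of day that fits their schedule is the whole point.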

You can’t afford to ignore automation. But if you’ve read this far, I hope you’ll take this away: to make automation work, you’ve got to build in the right kind of training and persistently test the system from a user’s point of view, to make sure you’re delivering the messages you think you are, in the right way.

And when it’s time to talk with a person, no AI system will do.