Over the past year, machine learning and artificial intelligence technology have made significant strides. Specialized algorithms, including OpenAI’s DALL-E, have demonstrated the ability to generate images from text prompts with increasingly uncanny skill. Natural language processing (NLP) systems have grown ever closer to approximating human writing. And some people even think an AI has attained sentience. (Spoiler alert: It has not.)
And as Ars’ Matt Ford recently pointed out here, artificial intelligence may be artificial, but it’s not “intelligence”—and it certainly isn’t magic. What we call “AI” is dependent upon the construction of models from data using statistical approaches developed by flesh-and-blood humans, and it can fail just as spectacularly as it succeeds. Build a model from bad data and you get bad predictions and bad output—just ask the developers of Microsoft’s Tay Twitterbot about that.
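To make that garbage-in, garbage-out point concrete, here is a minimal sketch of the failure mode in Python. The synthetic dataset, the logistic-regression model, and the 40 percent label-flip rate are all our own illustrative assumptions, not details from Tay or any other real project:

```python
# A minimal sketch of "bad data in, bad predictions out," using scikit-learn.
# Everything here (dataset, model, noise rate) is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a simple synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Good" data: train on the labels as-is.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Bad" data: flip 40% of the training labels to simulate a corrupted source.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.40
y_noisy = np.where(flip, 1 - y_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

# The model trained on corrupted labels scores measurably worse
# on the very same test set.
print("clean labels:", accuracy_score(y_test, clean_model.predict(X_test)))
print("noisy labels:", accuracy_score(y_test, noisy_model.predict(X_test)))
```

Same model, same test set; the only thing that changed was the quality of the training data.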
For a much less spectacular failure, just look to our back pages. Readers who have been with us for a while, or at least since the summer of 2021, will remember that time we tried to use machine learning to do some analysis—and didn’t exactly succeed. (“It turns out ‘data-driven’ is not just a joke or a buzzword,” said Amazon Web Services Senior Product Manager Danny Smith when we checked in with him for some advice. “‘Data-driven’ is a reality for machine learning or data science projects!”) But we learned a lot, and the biggest lesson was that machine learning succeeds only when you ask the right questions of the right data with the right tool.
So this time around, we set ourselves a John Henry-esque test: Could some of these no-code-required tools outperform a code-based approach, or at least deliver results accurate enough to make decisions at a lower cost than a data scientist’s billable hours? But before we could find out, we needed the right data and the right question.
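For a sense of what the code-based side of that contest looks like, here is a rough sketch of the kind of conventional baseline a data scientist might bill for, built with scikit-learn. The file name, column names, and model choice below are placeholder assumptions, not the dataset or pipeline this series actually uses:

```python
# A rough sketch of a hand-built, code-based baseline: preprocessing plus a
# model in one pipeline, scored with cross-validation. File and column names
# are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("training_data.csv")  # hypothetical input file
y = df["label"]                        # hypothetical target column
X = df.drop(columns=["label"])

# Route numeric and categorical columns to appropriate preprocessing.
numeric = X.select_dtypes(include="number").columns.tolist()
categorical = X.select_dtypes(exclude="number").columns.tolist()

pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Five-fold cross-validation yields the accuracy figure the no-code
# tools' results would be measured against.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"baseline accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The point of a baseline like this is a single, comparable number: if a no-code tool can match or beat that cross-validated score without the billable hours, it wins the race.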