
The 3 Most Important AI Trends for Data and Analytics Professionals to Watch in 2023

Many of the coming AI changes will make AI easier to use, more helpful in our jobs, and safer all around.

AI has disrupted almost every market where there is good quality data to be had. After all, learning and prediction are powerful in almost any setting. The hardest part remains ensuring that enterprise infrastructure -- digital and mechanical processes and products -- can deliver large amounts of high-quality data.


Fortunately, many organizations have overcome this challenge by accelerating the pace of AI innovation, and enterprises are poised to continue on that trajectory in 2023. We're already starting to see it with the debut of powerful products that use generative AI to create artifacts that have never been seen before. With DALL-E, for example, artificial yet plausible images are easy to create from a simple text prompt.

With this excitement come new implications that range from negative to downright dangerous. Deepfakes are a scary reminder that our legal and ethical frameworks regarding AI must adapt to the fast-evolving technology and tools on the market. We've already seen how AI combined with social media can present thorny issues for detection and content moderation.

It's not all doom and gloom, though. In fact, many of the coming AI changes will make AI easier to use, more helpful in our jobs, and safer all around. Here are three key AI trends data and analytics professionals should pay attention to in 2023:

Trend #1: Accessibility to AI will broaden

You may not see it, but AI is becoming a fundamental differentiator for cloud vendors. For example, DeepMind's AlphaTensor uses AI to discover novel, provably correct, and more efficient algorithms for core tasks such as matrix multiplication, tuned to the specific CPU or GPU hardware on which the code will run. This will allow cloud vendors to compete further on cost and performance, marking a huge step forward.
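The kind of algorithm AlphaTensor searches for can be illustrated by its classic precursor, Strassen's 1969 scheme, which multiplies two 2x2 matrices with seven scalar multiplications instead of the naive eight (AlphaTensor rediscovered Strassen's scheme and found new variants for larger matrices). A minimal sketch in Python, offered here only as an illustration of the idea rather than AlphaTensor's actual output:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's 7 multiplications.

    A and B are nested lists [[a, b], [c, d]]; a naive product needs
    8 scalar multiplications, Strassen's scheme needs only 7.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Seven products, each a single scalar multiplication.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the seven products into the four entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]


# The result matches the ordinary row-by-column product.
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Saving one multiplication looks trivial at this size, but applied recursively to large matrices it lowers the asymptotic cost of multiplication, which is why hardware-specific discoveries of this kind translate directly into cloud cost and performance advantages.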

Although this is more of an “AI under the hood” view, it’s important for two reasons. First, whether they know it or not, more people will be using AI than ever before, putting it in the hands of the masses. Second, we’re starting to see real, bottom-line business drivers for AI, which will trickle down from the major cloud vendors to smaller tech players.

Trend #2: Generative AI will become commercialized

We’ll start to see more enterprise products that use generative AI come to market in 2023, delivering value in unexpected domains, such as speech. The space is exciting because there are many, largely untapped, but very valuable use cases. In gaming, a user can opt to sound like their on-screen character. In a virtual meeting, a person with a speech impairment can make their voice easier to understand, enabling them to focus on their work contributions rather than potential misunderstandings.

Unlike DALL-E, which can be interesting but only somewhat useful, speech-to-speech (S2S) technology has the potential to empower participants to enhance conversations in real time and at scale. For customer service, this can be a game changer. For example, contact center agents can use generative AI to clearly understand callers from anywhere in the world, helping them resolve problems fast and feel more confident in their roles.

Trend #3: The legal and ethical underbelly of AI will be exposed

With all its glory and potential, there are still complex legal and ethical AI issues to resolve. The recent Copilot lawsuit is a prime example of many more cases that are sure to come. GitHub introduced the AI-powered coding assistant last year, trained on vast amounts of open source code -- including code under licenses that require crediting its creators. As you may guess, the suit alleges that this credit was not given, in violation of those license terms.

This is the first class-action suit in the U.S. against an AI system, but it’s just the beginning. Technology may be leaps ahead of the legal industry, but as AI embeds itself further into our everyday lives, companies and governments will begin to drop the hammer when it comes to safe, responsible practices. We will also see more transparency around cases such as this, and learn how to avoid these missteps for future deployments.

A Final Word

Despite AI looking more like a moving target for lawyers and ethicists, great progress will come from discussions -- and consequences -- around responsible uses of AI. This is a welcome trend, because we’ll need to monitor AI more closely with the commercialization of enterprise products and broader accessibility to them. We’re at an inflection point, and it will be interesting to see where AI takes us in the New Year.

About the Author

Yishay Carmiel is the founder and CEO of Meaning. He has a successful track record and vast experience building, launching, and growing disruptive AI-driven, revenue-generating products and services across startups and Fortune 500 companies. Carmiel is the author of numerous research papers in conversational artificial intelligence, machine intelligence, and deep learning and has been recognized as a leading global expert in the field. In 2017, Yishay was chosen by SpeechTek Magazine as Speech Luminary of the Year. You can reach the author via email.
