Start Strong, Finish Strong: The Two Points Where AI Projects Fall Apart

Alon Goren News

Start Strong, Finish Strong: The Two Points Where AI Projects Fall Apart
United States Latest News,United States Headlines
  • 📰 ForbesTech
  • ⏱ Reading Time:
  • 259 sec. here
  • 6 min. at publisher
  • 📊 Quality Score:
  • News: 107%
  • Publisher: 59%

AI projects are breaking down at two critical junctures. Get either of these wrong, and even brilliant execution in between won't save you.

that IT spending will increase by 9.8% this year compared to 2024, driven by generative AI adoption. Yet, organizations are still facing roadblocks in driving measurable value. A recent IBMAt this point, AI has demonstrated its groundbreaking impact across every industry.

It’s not the technology that’s causing problems. AI projects are breaking down at two critical junctures. The first breakdown point is at inception: getting alignment on objectives and what success looks like. The second comes at the backend: building the systems to evaluate and monitor performance once you've deployed. Get either of these wrong, and even brilliant execution in between won't save you.Too often, vague mandates come down from the C-suite. Sometimes, it's generic: "Deploy AI." Sometimes, it's more prescriptive: "We need AI to help acquire customers." But it still lacks parameters and specifics. This is a recipe for failure. For the cynic, this sounds like out-of-touch executives falling for hype. But here's the catch: AI projects must solve core business problems to drive ROI, and executives know those needs better than anyone. When efforts are driven purely by technology stakeholders, projects become experiments that never reach production. Company execs need to work together with tech leaders to define goals, identify where AI can drive real business value, determine what’s feasible and marshal resources to get it done. It sounds simple enough. However, countless organizations are failing at this very first step. Here’s how business and tech leaders can determine if they’re on the same page and are prepared for success in AI. Before embarking on any serious projects, the two sides should be able to clearly answer these questions:2. Can that problem be better solved by traditional methods ?4. Do we have a way to accurately track and measure that value? When alignment is strong and objectives are clear, organizations enter the implementation phase with a shared definition of success. The middle phase of building, iterating and refining certainly has its challenges. But when teams have answered these four questions clearly, they create the foundation needed to navigate those challenges effectively.On the other end of the arc, there’s another major hurdle that enterprises often run into: testing and evaluating AI performance. AI behavior can vary unpredictably. Companies need robust validation frameworks, diverse test datasets and continual monitoring to ensure consistent, trustworthy results in real-world conditions. Those processes can become cumbersome at scale. The core challenge starts with how organizations approach testing. Most enterprises test AI systems for success, but rarely test for failure. A team builds an impressive demo that works for intended use cases. Leadership sees the potential. But no one has tested the edge cases: the scenarios that don't work, questions the system can't answer or inputs that produce unreliable outputs. When failures surface in production, organizations scramble to patch them reactively. Making this more complex is the nature of AI failures themselves. Traditional software systems fail spectacularly and obviously. We’re all familiar with blue screens of death or complete system crashes. AI systems fail subtly. Response quality drifts by 2%. Processing time increases from 1.5 seconds to three seconds. Answers become slightly less relevant but still plausible. These gradual degradations are difficult to catch with standard monitoring tools, and users often notice problems before the organization does. This drift happens for several reasons. AI models can change when providers update their endpoints. A prompt that produced excellent results last month might yield mediocre outputs today. In multi-agent systems, small errors accumulate across interactions until the system has lost the narrative thread. Effective AI evaluation requires monitoring qualitative aspects across every interaction: Is it fast enough? Appropriate for the context? Consistent with yesterday? What did it cost? Did users find it valuable? These aren't binary pass/fail metrics. They require ongoing assessment, more like managing analysts than checking if a server is running. This is why post-production monitoring is just as critical as upfront testing. Organizations that succeed with AI build continuous monitoring into their architecture from day one. They establish baseline performance metrics, track deviations over time and create feedback loops that catch quality issues before users need to report them. Without this ongoing vigilance, even well-designed AI systems will degrade over time.Recent surveys showing that enterprises are struggling with AI ROI shouldn’t be read as an indictment of the technology. AI is as impactful as we’ve been told, and it’s only getting more powerful. The problem is how organizations are preparing for and evaluating their implementations. Internal misalignment and testing are two critical hurdles, but they can be overcome. Enterprises that clearly align the C-suite and technology implementors at the beginning will create the foundation needed for success. Organizations that establish rigorous testing and evaluations further ensure that their objectives are being met months and years in the future. Get these two areas right, and you create the conditions for AI implementations that deliver real business value.

We have summarized this news so that you can read it quickly. If you are interested in the news, you can read the full text here. Read more:

ForbesTech /  🏆 318. in US

 

United States Latest News, United States Headlines

Similar News:You can also read news stories similar to this one that we have collected from other news sources.

Thousands of runners, joggers cross finish line at inaugural San Antonio MarathonThousands of runners, joggers cross finish line at inaugural San Antonio MarathonThousands of runners, joggers and walkers all came together to run the inaugural San Antonio Marathon on Sunday.
Read more »

Nike Cross Nationals: CBA claims podium, Haddonfield secures Top 10 finishNike Cross Nationals: CBA claims podium, Haddonfield secures Top 10 finishChristian Brothers finished third in the Nike Cross Nationals race after the action at Glendoveer Golf Course in Portland, OR
Read more »

Bessent says US 'going to finish the year' with 3% GDP growth despite government shutdownBessent says US 'going to finish the year' with 3% GDP growth despite government shutdownTreasury Secretary Scott Bessent predicts the U.S. will finish 2025 with 3% GDP growth despite economic volatility from tariffs and immigration policy changes.
Read more »

Freshman’s dramatic relay finish sparks No. 6 IHA swimming past No. 20 Passaic TechFreshman’s dramatic relay finish sparks No. 6 IHA swimming past No. 20 Passaic TechElle Mulder’s incredible debut steered Immaculate Heart girls swimming to a impressive win over Passaic Tech.
Read more »

5 Financial to-dos before the end of 20255 Financial to-dos before the end of 2025Here are some moves to help you finish the year strong financially.
Read more »

Copper hits record high near $11,800 amid strong start to 2025Copper hits record high near $11,800 amid strong start to 2025Copper started the week at a new record high of almost $11,800 per ton, up 33% on the start of the year. China impressed with good export figures: exports were almost 6% higher than last year, giving China its third-largest monthly trade surplus ever.
Read more »



Render Time: 2026-04-01 23:08:33