Cresta, Upwork, Discord, and Kustomer on Humans and Agentic AI

At the AI Agent Conference in New York, one of the most philosophically grounded panels of the two-day event brought together Ping Wu of Cresta, Andrew Rabinovich of Upwork, Clint Smith of Discord, and Brad Birnbaum of Kustomer. The session was framed around a question most AI conferences sidestep: what does the relationship between humans and AI systems actually look like when the systems are running in production, at scale, on real workflows?

The answer from all four panelists was consistent. The future is not humans versus AI. It's humans and AI — and getting that collaboration right is harder than most organizations are planning for.

The Magic Is Real, But Only When Grounded in Human Expertise

Andrew Rabinovich opened with something that landed well because it was honest in both directions. AI has crossed a threshold. The capabilities are genuinely different from where they were two years ago. "Things are becoming so efficient and productive that it's no longer science fiction. For years, it didn't work. Now everything is working like magic."

But the second half of that statement is the part that matters: "It's only magical if it gets grounded in human experience and expertise."

AI systems don't become useful in isolation. They become useful when they're connected to real workflows, trained on domain-specific operational context, and paired with human judgment that knows what good output looks like. The organizations discovering this the hard way are the ones that deployed capable models into workflows without doing the integration and grounding work first.

Upwork Is Selling Outcomes, Not Labor

One of the most strategically significant shifts described in the session came from Rabinovich's description of what Upwork is becoming. The platform is moving from a marketplace where clients hire freelancers to one where clients describe what they want and AI agents orchestrate the delivery — often blending human and automated work in ways the client doesn't see or need to manage.

"Clients simply speak to the agent and ask for the outcome. The platform is transforming into a work delivery platform."

The unit being sold is shifting from hours of labor to delivered outcomes. That's not a minor product update. It's a fundamental change to what the business is. And the implications extend far beyond Upwork — any professional services model still organized around billing for time is looking at a version of this transition.

Autonomous Customer Service Hits a Wall at 78%

Brad Birnbaum's section on AI customer service included the most practically useful data point of the session. Kustomer has deployed AI-powered support at scale. The results are real — large support volumes handled, enterprise-quality service delivered to smaller businesses, interactions monitored and augmented in real time. And then this: "We haven't gone beyond about 78% resolution."

That ceiling matters. The remaining 20-plus percent of customer support interactions are the ones that require empathy, nuanced judgment, emotional intelligence, and the handling of genuine ambiguity. AI systems don't deliver those capabilities reliably, and the consequences of getting them wrong in a support context are high enough that human escalation remains necessary.

"Transactional interactions should be automated. Humans should be freed from repetitive work. Empathy remains fundamentally human."

The practical implication: organizations deploying AI for customer service should design explicitly for that ceiling rather than assuming it will move. Build the human escalation paths carefully. They're not a fallback — they're a permanent feature of the architecture.

Humans Are Best at the Beginning and the End

One of the sharpest conceptual frames of the session came from a breakdown of what intelligence actually involves: deciding what to do, doing the work, and evaluating the result. The panel's argument about where humans and AI fit in that structure was precise.

"Humans are really good at deciding what to do. Humans are really good at evaluating the result. Machines are narrowing the middle layer."

AI excels at execution — scaling repetitive work, processing volume, automating mechanical tasks. The beginning and end of any meaningful workflow — defining objectives and evaluating outcomes — remain human strengths. The practical design implication: organizations should build AI systems that expand human capacity in the execution layer while preserving and elevating human involvement in goal-setting and evaluation. That's a different organizational structure than most companies currently have.

Context-Aware Agents Are the Next Frontier

Ping Wu's contribution to the session focused on where agent architecture is heading: systems that work alongside humans in real time, embedded directly into the browser and operational workflow, observing work as it happens and augmenting it continuously.

"The AI works in front of humans in the browser. The human remains in control."

The use cases he described — contact centers, healthcare workflows, claims processing, administrative systems — share a common structure: repetitive, high-volume work where real-time AI assistance can dramatically improve both speed and quality without removing the human from the loop. The agent doesn't replace the worker. It removes the mechanical burden so the worker can focus on what requires judgment.

Nobody on This Panel Thinks AGI Is Close

The session included a segment on unpopular opinions that produced some of the most direct quotes of the conference. On AGI timelines specifically, Rabinovich was unambiguous: "There is no magic. We are far away from AGI."

The panel's technical reasoning: current models remix and optimize from training data. They don't discover genuinely novel understanding. The parameter scaling and neuron count comparisons that fuel AGI timelines don't account for the architectural breakthroughs that would actually be required. Current progress is impressive engineering — not evidence of approaching human-equivalent intelligence.

Ping Wu added context on workforce disruption that cut against the dominant narrative: "A lot of AI layoffs are actually corrections from overhiring. This transformation takes decades."

The historical analogy the panel reached for — steam engines, electricity, industrial transformation — is apt. These technology transitions take much longer to fully unfold than the hype cycle suggests. Societal and organizational change doesn't happen overnight, and the industries most affected often aren't the ones that move fastest initially.

The Hardest Problems Are Organizational, Not Technical

The panel closed by naming the unsolved challenges each speaker is actually losing sleep over. The list was notable for how little of it was technical: how humans become comfortable working alongside agents, how entire companies transform into AI-first organizations, and how to maintain context integrity across complex multi-step workflows.

"How do humans become comfortable working with agents? How do we transform entire companies into AI-first organizations?"

The technology is moving faster than the organizations deploying it. The constraint isn't model capability. It's trust, adoption, change management, and the organizational structures required to make human-AI collaboration actually work at scale.


This session was presented at the AI Agent Conference 2026 in New York. Panelists represented Cresta, Upwork, Discord, and Kustomer.