Teaching AI and Software Development at Chulalongkorn University: A Two-Hour Conversation That Went Well Beyond the Slides


Benoit Schneider

Managing Technical Director

Last week I had the privilege of stepping into a classroom at Chulalongkorn University as a visiting lecturer — and walking out two hours later having had stimulating professional conversations with the professors and students alike.

The Faculty of Integrated Innovation

University building with modern architectural design.

Chulalongkorn University needs little introduction in Thailand. It’s the country’s oldest and most prestigious university, and its Faculty of Integrated Innovation (IntIn) — formerly known as the College of Innovation — is one of the most forward-thinking academic environments in the region. The faculty sits at the crossroads of technology, design, business, and social science, training a generation of graduates who are expected to work across disciplines rather than within the walls of a single one. 

The students who attended my lecture were not IT students, but they will inevitably use software, and many will likely participate in building some. For a topic like AI integration in software development, it’s exactly the right audience: students who need to understand technology not just as practitioners, but as future leaders, entrepreneurs, and decision-makers.

An Old Colleague, a New Full-Time Calling

The invitation came from a familiar face. The professor who reached out to me had, a few years ago (before AI chatbots or vibe coding were even a thing), been the CTO of a startup project we worked on together at Outsourcify.

During that engagement, we spent several intense weeks side by side — running workshops, mapping out architecture, debating technical decisions, iterating through designs. He was already teaching part-time at the university back then, and now academia is his full-time focus. He asked me to speak as a lecturer in his course on AI and software development, and I didn’t hesitate.

The Classroom

Lecture on AI Integration in Software Development

Chulalongkorn University is vast, with multiple faculties located right in the center of Bangkok, between the BTS stations Siam and National Stadium to the north and the MRT line along Lumphini Park to the south.

The session drew around 50 students, with a mix of energy levels typical of a morning lecture: some leaning forward from the first minute, a few warming up gradually. What made it especially interesting was the presence of two guest professors who had come specifically to attend the talk. That changes the dynamic in the room. When your audience includes academics who have thought deeply about some of the same questions you're addressing, you feel it: in the quality of the questions, in the subtle nods and furrowed brows, in the way the discussion starts to stretch beyond the prepared material.

What the Lecture Covered

The talk was built around a core argument I’ve come to hold firmly after a decade of running a software agency: AI has lowered the barrier to writing code, but it has not lowered the barrier to building good software — and those are two very different things.

I had prepared around 20 slides and detailed notes, but as the discussion evolved and questions were asked, I found myself moving beyond them and speaking more freely.

We started with first principles. Software is not code. Software is a process that begins with a real-world problem, moves through user research, trade-off decisions, and team collaboration, and only then arrives at implementation. Code is the output of that process, not the starting point. 

I’ve seen too many projects fail — not because the code was bad, but because the wrong problem was being solved, or the communication between business and technical stakeholders broke down. AI doesn’t fix either of those.

From there, we walked through the evolution of AI-assisted development over the last five years: from GitHub Copilot’s intelligent autocomplete in 2022, through the explosion of consumer vibe coding platforms in 2024, to the current hybrid phase in 2026 where the industry is finding a sustainable balance between AI speed and engineering discipline.

We looked honestly at where vibe coding — the practice of building software through natural language conversation with an AI — genuinely shines: ideation, prototyping, proving a concept, getting fast feedback from stakeholders. And we looked equally honestly at where it breaks down: context drift over long sessions, architectural chaos, features that were hallucinated rather than requested. I shared a story about a client who came to us with a vibe-coded application that looked polished on the surface but had a broken database structure, no real security layer, and a tangle of logic underneath that was cheaper to rebuild from scratch than to fix.

The second half of the lecture focused on what’s emerged in response: spec-driven development, where you write a clear specification before a line of code is generated, and more sophisticated multi-agent frameworks like BMAD (Breakthrough Method for Agile AI-Driven Development), which simulate a full software team — analyst, product manager, architect, developer, QA — using different AI agents in structured roles. The closing message was directed particularly at students who aren’t planning to become developers: the most valuable skills in the AI era aren’t about writing code, they’re about defining problems clearly, communicating precisely, and bridging business needs with technical execution.
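The role-based pipeline described above can be sketched in a few lines of code. This is an illustrative toy only: the role names echo the BMAD idea of a simulated software team, but every data structure and function here is hypothetical, not the framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A document handed from one role to the next (spec, design, code, review)."""
    kind: str
    content: str
    history: list = field(default_factory=list)

def analyst(problem: str) -> Artifact:
    # Turns a raw problem statement into a written specification
    # *before* any code is generated -- the core of spec-driven development.
    spec = f"SPEC: users need a way to {problem}"
    return Artifact("spec", spec, [problem])

def architect(spec: Artifact) -> Artifact:
    # Derives a design from the spec, never from the raw problem directly.
    design = f"DESIGN for [{spec.content}]: one service, one datastore"
    return Artifact("design", design, spec.history + [spec.content])

def developer(design: Artifact) -> Artifact:
    code = f"CODE implementing [{design.content}]"
    return Artifact("code", code, design.history + [design.content])

def qa(code: Artifact) -> Artifact:
    # A trivial stand-in for a QA agent checking the developer's output.
    verdict = "PASS" if "CODE" in code.content else "FAIL"
    return Artifact("review", verdict, code.history + [code.content])

def pipeline(problem: str) -> Artifact:
    # Each role only sees the previous role's artifact, which keeps the
    # context small and explicit instead of drifting over a long session.
    return qa(developer(architect(analyst(problem))))

result = pipeline("track invoices")
print(result.content)        # "PASS"
print(len(result.history))   # 4 artifacts in the audit trail
```

The point of the structure is not the string formatting, of course, but the handoffs: every stage produces a reviewable artifact, so the process leaves an audit trail instead of one long, drifting conversation.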

When the Questions Got Philosophical

The planned lecture was 60–65 minutes. We ran for nearly two hours.

The student questions were thoughtful — mostly practical, focused on tools, workflows, and career implications. But it was the professors in the room who pushed things into deeper territory, and I appreciated it.

One question in particular stood out, and it came from one of the students. The framing was this: when the first automobiles were introduced in the early 20th century, horses and cars coexisted for a period of time as complementary modes of transport. But eventually, cars proved more efficient, scalable, and economically dominant. In most practical domains, horses were displaced.

Could the same happen with human intelligence? Could AI reach a threshold where the performance gap becomes so wide that human cognitive contribution becomes, in most domains, marginal?

It’s a serious question. I told the room that this sits somewhat outside my direct expertise. I enjoy philosophy, though, and I offered a reflection.

The horse-and-car analogy is compelling, but it overlooks something important. The first cars were not autonomous systems. They were tools. They amplified human capability, but they did not replace human agency. They required a driver, direction, and intention. The displacement of the horse did not eliminate the human from the loop — it eliminated one tool in favor of another.

That distinction matters.

The boundary AI appears to keep encountering — despite extraordinary progress — is not performance alone. It is agency. Human intelligence is not just computational output; it is embedded in intention, continuity of experience, and intrinsic motivation. A human being can initiate action without being prompted. We care about outcomes. We define goals before solving them.

AI, as it currently exists, is extraordinarily powerful at synthesis. It recognizes patterns, recombines information, and generates coherent outputs at impressive scale. But it does so within externally defined objectives. It does not possess intrinsic goals. It does not decide what ought to matter.

Whether that gap will eventually close is an open question. But for now, the analogy between horses and humans breaks down at precisely that point: horses were a replaceable technology. Human agency is not merely a tool in the system — it is the system’s origin.

Someone could argue that if an AI has read every novel ever written, it would effectively “know” all possible human interactions. But that assumes human experience is finite and fully captured in text. Even if a model internalized every story ever published, it would still be operating on representations of experience, not experience itself. More importantly, mastering patterns is not the same as originating intent. AI can recombine, extrapolate, and synthesize within an existing probability space, but it does not generate its own aims. It does not decide what should matter. And that distinction between pattern mastery and goal formation is where the analogy with technological replacement begins to break down.

Bringing the conversation back to software development, this distinction matters in very practical terms. Building software is not just about generating functional code; it is about defining the right problem, aligning stakeholders with conflicting priorities, making trade-offs under uncertainty, and taking responsibility for outcomes. AI can accelerate implementation, suggest architectures, and even write large portions of a codebase. But it does not sit in a room negotiating scope between a founder and a product manager. It does not absorb the political, emotional, and strategic dimensions of a project. Software is a social process before it is a technical artifact — and as long as that remains true, human teams will remain central to its creation.

That conversation alone made the session worth the two hours.

A Reflection

I left the campus thinking about how rarely we get to have these kinds of slow, unhurried conversations about the industry we’re working in. At Outsourcify, the pace of client work doesn’t always create space for stepping back and asking the bigger questions. An afternoon in a university classroom, in front of curious students and sharp academics, is a useful corrective.

If you’re running a course, conference, or event and want someone to speak practically about AI in software development — from the perspective of an agency that builds with these tools every day — I’m genuinely happy to join the conversation.

And to the professor who invited me: thank you. Four years from that architecture workshop to this — not a bad trajectory.


Outsourcify is a Bangkok-based web and application development agency with over ten years of experience delivering software for international organizations, startups, and enterprises worldwide. You can reach us at outsourcify.net.

Benoit Schneider · Managing Technical Director

After studying to become a Web Engineer at the UTBM in France, Benoit worked in the IT departments of large companies in Paris, first as a web developer and then as a project manager, before becoming a freelance web consultant in 2010 and finally co-founding Outsourcify in Thailand.
