Case Study
Enhancing Professional Conferences: The Role of AI Chatbots in Attendee Interaction and Logistics
A privately hosted professional conference attracting over 1,300 leaders from 8 countries was held in California in early 2024. It featured a balanced mix of attendees: 18% executives, 43% mid-career professionals, and 39% emerging leaders.
About:
Over three days, the conference offered more than 200 sessions, including keynotes, panels, and workshops on topics such as innovation management, AI applications, and digital transformation, highlighting the disruptions and opportunities that the latest technology creates for traditional enterprises. More than 70% of the content touched directly on generative AI.
The Challenge: Addressing Complex Conference Logistics
Organizing and executing a large professional conference is a daunting task, and the administrative and logistical overhead of managing a multinational audience is significant. Attendees routinely ask about registration and check-in, travel and accommodations, timetables and venue locations, presentation materials, speaker networking contacts, and countless other matters that a limited support staff cannot possibly address manually.
The Solution
Partnering with GPT-trainer, the conference organizers integrated a suite of AI chatbots into the conference website and iOS / Android app. The chatbots included a general Q&A bot for on-demand distribution of logistical information, pretrained “AI digital doubles” of speakers that could answer technical questions about their session topics in real time, and interest-matching bots that recommended sessions and content based on each attendee’s professional interests.
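To make the interest-matching idea concrete, the sketch below shows one common way such recommendations can be produced: embedding an attendee's stated interests and each session description, then ranking sessions by similarity. The embed() parameter and the ranking logic are illustrative assumptions, not GPT-trainer's actual implementation.

```python
# Minimal sketch of interest-based session matching: rank sessions by the
# similarity between an attendee's stated interests and each session
# description. The embed() callable is a placeholder for whatever embedding
# model the production bots used.

from math import sqrt
from typing import Callable


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero-length)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def recommend_sessions(
    interests: str,
    sessions: dict[str, str],             # session title -> description
    embed: Callable[[str], list[float]],  # placeholder embedding function
    top_k: int = 5,
) -> list[str]:
    """Return the top_k session titles most similar to the attendee's interests."""
    query_vec = embed(interests)
    scored = [
        (cosine(query_vec, embed(description)), title)
        for title, description in sessions.items()
    ]
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_k]]
```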
Results
Number of Chatbot Conversations: Over three days, the AI chatbots recorded an aggregate of more than 8,000 conversation sessions.
Resolutions: The general Q&A chatbot resolved an impressive 79% of all incoming queries, with responses generated immediately after users submitted their requests. The “AI digital doubles” were trained on speaker-provided, domain-specific datasets and configured to be considerably more conservative. Their resolution rates varied widely but still averaged around 65%. Many speakers used the built-in human handover feature to follow up after their sessions on questions the AI could not answer.
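As a rough illustration of how such a handover flow could work, the sketch below escalates any question the bot cannot answer above a confidence threshold to a queue the speaker can review after the session. The names used here (answer_with_confidence, HandoverQueue) are hypothetical and are not part of GPT-trainer's product.

```python
# Hedged sketch of a human-handover flow: if the digital double cannot answer
# confidently, the question is queued for the speaker to address later.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class HandoverQueue:
    """Holds (attendee, question) pairs awaiting a human follow-up."""
    pending: list[tuple[str, str]] = field(default_factory=list)

    def escalate(self, attendee: str, question: str) -> None:
        self.pending.append((attendee, question))


def handle_question(
    attendee: str,
    question: str,
    answer_with_confidence: Callable[[str], tuple[str, float]],  # returns (answer, confidence 0..1)
    queue: HandoverQueue,
    threshold: float = 0.7,  # a conservative cutoff, in the spirit described above
) -> str:
    """Answer directly when confident; otherwise escalate to the speaker."""
    answer, confidence = answer_with_confidence(question)
    if confidence >= threshold:
        return answer
    queue.escalate(attendee, question)
    return "I'll pass this along to the speaker, who will follow up after the session."
```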