
Join the Future of Human-Level AI Interaction: First AGI Week in the Bay Area

Hosted by Eva Ngai, James Le & Ka Shing Chan
 
 
About Event

🌟 Invitation to the Future: Join the Future of Human-Level AI Interaction 🌟

🌐 AGI Weekend 🌐

📅 Nov 27th, 9:30 AM - 6:00 PM

The Heyns Room at The Faculty Club at UC Berkeley, Minor Ln, Berkeley, CA 94720

***Scroll to the bottom for the detailed agenda***

1. 10:00 - 11:15 The first-ever real-time human agent social simulation brought to life: a Build-Your-Human-Agent Workshop, integrated into the dynamic Berkeley campus interface of Petkeley, a Berkeley student-built social app

2. 11:15 - 18:00 Human Agent Multimodal Mini Hackathon

· Powered by Unseen Identity Neuroscience Generative AI, and supported by the Twelve Labs Video Understanding API and Petkeley, a Berkeley student-built social app.

📧 An email with venue details and the detailed agenda will be sent to approved participants.

Participate in our hackathon on Devpost: https://humanagentmultimodal.devpost.com/

Stay up to date on the hackathon and ask any questions in our Telegram group: https://t.me/+9rrBBRr42iIxMjRl.

📱 More event updates will follow on LinkedIn and Twitter later in November.

A groundbreaking event aimed at bringing Real-Time Human-Level AI Interaction to a new dimension 🚀

  • 🤖 Connect with your very own AI Twin Human Agent, instantly customized to a generative AI mental model of you, built within seconds.

  • 💬 Watch as your AI Twin Human Agent not only understands your words but also reasons with you, grasps your emotions within seconds, and transforms you into an expert across any knowledge domain, all in real time.

Step into a new era of reasoning-linked, emotionally attuned human-AI interaction 🚀

🛸 This breakthrough brings the generative-agents social simulation concept to life: users' generative AI human agents act as their AI clones, producing scalable, personalized 1:1 interactions within seconds and dynamically mirroring users' emotional and cognitive traits, all achieved through Unseen Identity's innovative 30-second generative cognitive screening.

🌐 Without requiring any user info or prior prompts/instructions, watch users' human agents replicate genuine social interactions with instant, personalized human behaviors and narratives, accelerating communication and human-experience simulation by surfacing highly individualized choices that resonate with each user's needs.

Unseen Identity Neuroscience Generative AI has been building human agent cloning for cognitive-thinking and emotion personas. Without providing any info, prompts, or instructions, users can build their very own AI Twin Human Agent in 30 seconds; the agent learns immersively through interaction with its environment (multimodal input data) and helps the AI system acquire intuitive knowledge of user preferences, enabling online interactions that accelerate communication and task automation.
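
For a sense of how that flow could look to a developer, here is a purely illustrative sketch: every host, endpoint, and field name below is invented for illustration and is not the real Unseen Identity API (see the access link at the bottom of this page for the actual documentation):

```python
# Hypothetical sketch of the flow described above: 30-second cognitive
# screening -> AI Twin Human Agent -> personalized response.
# All hosts, endpoints, and field names are invented for illustration;
# consult the real Unseen Identity API docs via the access link below.
import requests

BASE = "https://api.unseenidentity.example"  # placeholder host, not the real one
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# 1. Run the 30-second generative cognitive screening (no user info,
#    prompts, or instructions required, per the description above).
screening = requests.post(f"{BASE}/screenings", headers=HEADERS).json()

# 2. Build the user's AI Twin Human Agent from the screening result.
agent = requests.post(
    f"{BASE}/agents",
    headers=HEADERS,
    json={"screening_id": screening["id"]},
).json()

# 3. Ask the agent for a reply that mirrors the user's emotional and
#    cognitive traits.
reply = requests.post(
    f"{BASE}/agents/{agent['id']}/respond",
    headers=HEADERS,
    json={"message": "Help me break the ice with a new hackathon teammate."},
).json()
print(reply["text"])
```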

The Twelve Labs Video Understanding API complements this by bridging human agent behavior simulation with video understanding, extending the reach of simulated thinking and feeling when generating individualized responses. Harness the power of multimodal video understanding: whether you have terabytes or petabytes of video, Twelve Labs can help you make sense of it all, transforming that information into vector representations that enable fast, scalable semantic search.
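
To make that concrete, here is a minimal sketch of indexing a video and running a semantic search with the v1.2 REST API covered in the Quickstart linked at the bottom of this page; engine and field names follow the docs at the time of writing and may change:

```python
# Minimal sketch of the Twelve Labs v1.2 REST API: create an index,
# upload a video, then run a semantic search over it. Engine and field
# names follow the v1.2 docs at the time of writing and may change.
import requests

API_URL = "https://api.twelvelabs.io/v1.2"
HEADERS = {"x-api-key": "<your-api-key>"}  # key from the Twelve Labs dashboard

# 1. Create an index backed by the Marengo video-understanding engine.
index = requests.post(
    f"{API_URL}/indexes",
    headers=HEADERS,
    json={
        "index_name": "agi-week-demo",
        "engines": [{"engine_name": "marengo2.5",
                     "engine_options": ["visual", "conversation"]}],
    },
).json()

# 2. Upload a video for indexing. Processing is asynchronous: poll
#    GET /tasks/{task_id} until its status is "ready" before searching.
task = requests.post(
    f"{API_URL}/tasks",
    headers=HEADERS,
    data={"index_id": index["_id"]},
    files={"video_file": open("demo.mp4", "rb")},
).json()

# 3. Semantic search over everything indexed so far.
results = requests.post(
    f"{API_URL}/search",
    headers=HEADERS,
    json={"index_id": index["_id"],
          "query": "moments where the audience reacts with excitement",
          "search_options": ["visual", "conversation"]},
).json()
for clip in results.get("data", []):
    print(clip["video_id"], clip["start"], clip["end"], clip["score"])
```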

Petkeley: a Berkeley student-built social app in which your virtual pet roams a dynamic campus, connecting with others and syncing with real-time events. Discover which buddies you are the best fit for, and connect and converse with friends to deepen your relationships. Expect to encounter both familiar and new faces.

🔥 Mini Hackathon Challenges 🔥 Teams can choose from the following challenges:

  1. Intuitive Prompt Refinement: Create a prompt-revision augmentor tool that enhances intuitive multimedia content outputs based on human agents' persona insights about users.

  2. Background Action/Event Discovery: Build a human agent tool that works in the background to find relevant event recommendations based on users' interests and persona preferences.

  3. Image/Video Engagement: Create an AI model that detects and summarizes what people find most interesting in images or videos, based on the human agent's emotion predictions, for individualized user engagement.

  4. LinkedIn Expert Tracker/Insight Companion (or other social media): Create a tool that enables human agents to track and update users on the latest multimedia insights shared by industry experts on platforms like LinkedIn.

  5. Personalized Learning Navigator: Build an AI assistant that customizes multimedia learning resources based on the cognitive-thinking preferences of each user's human agent, for adaptive learning.

  6. Narrative Prompt Generator: Design a tool that generates prompts for social simulation narratives, using the personalized engagement preferences of the user's human agent for storytelling realism.

  7. Customized Visual Tags: Design a tool that generates personalized image and video tags based on the emotion and reasoning preferences of the user's human agent.

  8. Multimodal Communication Accelerator: Develop a system that enables users to communicate faster with human agents through a combination of text, audio, and video.

  9. Emotion Prediction in Video: Build an Emotion Vision AI with human agents that can detect and predict individuals' emotions in video content.

  10. Multimodal Storytelling: Develop a platform for users to create human agents that share and interact with multimedia narratives and stories.

🤖 Event Flow & Essential Timeline:

9:30 AM: Registration and networking kick-off.

10:00 AM – 11:15 AM:

Keynote Presentations: Eva Ngai, Founder & CEO of Unseen Identity Neuroscience Generative AI, and James Le, Head of Developer Experience, Twelve Labs

  • A talk on human agent social simulation and the Unseen Identity API, including its recent profile integration into Petkeley, the social app built by Berkeley students

  • An intro to the importance of collaboration between human agents and multimodal data understanding

  • James Le discusses the Twelve Labs API

  • Introduction of the ideation hackathon challenges and a Q&A session

11:15 AM - 12:00 PM

Team Formation Session and a guest talk by Matthew Murrie, creator of Curiosity-Based Thinking and author of The Screaming Hairy Armadillo and The Book of What If...?

  • Human agents provide guidance on forming initial connections and finding common interests; industry, non-technical, and technical professionals share their interests and project ideas through their human agents

12:00 PM - 1:00 PM

Networking Session and Lunch Break: participants can take a break and have lunch on their own; beverages will be provided

1:00 PM – 3:30 PM

  • Teams work on project ideation and initial development of their projects

  • Human agents provide brainstorming support, and Matthew facilitates the discussion with innovative thinking techniques

3:30 PM - 3:45 PM

🤖 Demonstration of social simulation interaction among all participants on the Unseen Identity human agent platform

  • Human agents suggest that participants with common interests mingle, discuss projects further, and exchange contact information.

4:00 PM - 5:15 PM

  • Project submission at 3:45 PM

  • Teams present their project ideas and initial development in a brief pitch format, starting at 4:00 PM

5:15 PM - 5:30 PM

  • Announcement of the top 3 prizes and closing remarks

5:30 PM - 6:30 PM

  • Networking Session

 

🔍 Before You Arrive:

  • Prep up: Explore the Unseen Identity API and the Twelve Labs Video Understanding API. Links and guides are available below for a deeper understanding.

See you all on Nov 27th!

- Eva Ngai

 

To get started using the Twelve Labs Video Understanding API, follow the Quickstart Guide in the API Docs. Additionally, here are further links to peruse:

  • Community applications built with the API from previous hackathons: https://docs.twelvelabs.io/v1.2/docs/from-the-community

  • GitHub repository with sample notebooks that showcase how to use the API: https://github.com/twelvelabs-io/23labs-hackathon-playbook/

To get access to the new Generate API, powered by Twelve Labs' new Pegasus-1 video-to-text foundation model, fill out this Typeform: https://twelvelabs.typeform.com/to/cA0qthmi. For the second question in the form, which asks why you're interested, please write "Human Agent Multimodal Mini Hackathon."
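
Once access is granted, a call to the Generate API looks roughly like this minimal sketch, which assumes the v1.2 open-ended generation route described in the docs and a video that has already been uploaded and indexed; exact field names may differ:

```python
# Minimal sketch of the v1.2 Generate API (Pegasus-1 video-to-text).
# Assumes the video has already been uploaded and indexed (see the
# Quickstart above); exact field names may differ from the live docs.
import requests

resp = requests.post(
    "https://api.twelvelabs.io/v1.2/generate",
    headers={"x-api-key": "<your-api-key>"},
    json={
        "video_id": "<indexed-video-id>",
        "prompt": "Summarize this video's key moments in three bullet points.",
    },
)
print(resp.json()["data"])
```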

To get support from Twelve Labs team members during the hackathon, please join the Discord community: https://discord.gg/NbU8mGq4.

 

Unseen Identity Human Agent: Elevate Your Experience with 30-Second Cognitive Screening and Personalized Interactions!

  • Unlock the power of API access for a rapid 30-second cognitive screening with the Unseen Human Agent feature.

  • Elevate your experience with personalized interactions using our distinctive user model for unparalleled customization in everything you do!

  • Get early access (available November 26th, 2023): https://unseenidentity.xyz/api-access-for-unseen-30-seconds-screening/