Burning Questions

  1. What challenges and best practices have emerged from deploying LLMs to large-scale user bases? If you have deployed an LLM from a major vendor such as Google, Microsoft (OpenAI), or Amazon (Anthropic) to thousands of users, I want to talk to you.

  2. To what extent will compute supply constrain the growth of generative AI? When will inference supply (tokens per second) catch up with demand?

  3. Which AI liability cases should we monitor, and what are the pivotal legal questions at stake? For example, if a corporation uses a cloud AI service to hire people, and the AI is proven to have been biased, is the corporation liable, the cloud AI service, both, or neither?