One major challenge is the inference time of the models. While models like ChatGPT have become faster, they still take noticeable time to process each request, so a simulation with many agents can suffer significant latency in real-time interactions. Optimizations and fine-tuning will be necessary to make the models faster and more efficient.
Actually, this is disputed: at this stage the cost of API calls is the crucial constraint, because a simple one-sentence action burns roughly 40k–75k tokens.
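A back-of-envelope estimate makes the scale of that cost concrete. The 40k–75k tokens per action figure comes from the comment above; the per-token price is a placeholder assumption, since rates vary by model and change over time.

```python
# Tokens per one-sentence agent action, per the figure cited above.
TOKENS_PER_ACTION_LOW = 40_000
TOKENS_PER_ACTION_HIGH = 75_000
PRICE_PER_1K_TOKENS = 0.002  # placeholder USD rate; varies by model

def estimate_cost(num_agents: int, actions_per_agent: int) -> tuple[float, float]:
    """Return a (low, high) USD cost range for a simulation run."""
    actions = num_agents * actions_per_agent
    low = actions * TOKENS_PER_ACTION_LOW / 1000 * PRICE_PER_1K_TOKENS
    high = actions * TOKENS_PER_ACTION_HIGH / 1000 * PRICE_PER_1K_TOKENS
    return low, high
```

Under these assumptions, 25 agents taking 100 actions each lands in the hundreds of dollars, which illustrates why per-action token usage, not raw latency, can dominate the economics of a multi-agent simulation.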
Finding the right balance between autonomy and control depends on the specific application. Park says one of the main challenges with generative agents is that they need a very clearly defined objective function.