
Managed foundation model platform for building generative AI applications.
4 months ago
If you are already on AWS, Bedrock is a no-brainer: there is no separate vendor like OpenAI to manage API keys or billing for. The ability to switch between Llama 3 and Claude with just a config change gives us great flexibility. We built an internal chatbot in a week using Bedrock Agents.
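A minimal sketch of the kind of model switch described in this review, assuming boto3 and Bedrock's Converse API. The model IDs, region, and prompt are illustrative; actual model IDs vary by region and version.

```python
import boto3

# Illustrative model ID; swap for e.g. "meta.llama3-70b-instruct-v1:0"
# (in practice this would come from config or an environment variable).
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The Converse API uses one request shape regardless of the underlying model,
# so switching models is just a change to MODEL_ID.
response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 incident report."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```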
10 months ago
We chose Bedrock because it keeps all our data within our existing VPC. Accessing models like Claude 3.5 Sonnet via a simple API call without managing infrastructure is a massive win. The 'Knowledge Bases' feature for RAG (Retrieval-Augmented Generation) was surprisingly easy to set up. It's much cleaner than managing raw EC2 instances for inference.
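A minimal sketch of the Knowledge Bases RAG call mentioned here, assuming boto3 and the bedrock-agent-runtime client. The Knowledge Base ID, model ARN, region, and query are placeholders, not values from the review.

```python
import boto3

# Placeholder identifiers; substitute your own Knowledge Base ID and model ARN.
KB_ID = "XXXXXXXXXX"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# RetrieveAndGenerate queries the Knowledge Base and grounds the model's answer
# in the retrieved documents in a single call.
response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our VPN onboarding process?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])
```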
about 1 year ago
Bedrock is powerful, but getting our quotas raised for production traffic was a nightmare. We kept hitting throughput limits during testing. Also, the console UI is a bit clunky compared to OpenAI's playground. Once it's running via the SDK it's fine, but the operational friction is real at enterprise scale.
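The throughput limits described here typically surface as throttling errors from the runtime API. As a stopgap (not a substitute for a Service Quotas increase), client-side retry behavior can be tuned; a minimal sketch assuming boto3, with an illustrative model ID:

```python
import boto3
from botocore.config import Config

# Adaptive client-side retries smooth over intermittent throttling,
# but they do not replace a proper service-quota increase.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1", config=retry_config)

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Ping"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```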