SageMaker Canvas provides access to ready-to-use foundation models (FMs) for content generation, text extraction, and text summarization. You can access FMs such as Claude 2, Amazon Titan, and Jurassic-2 (powered by Amazon Bedrock), as well as publicly available FMs such as Falcon and MPT (powered by SageMaker JumpStart), and fine-tune them using your own data.
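As a minimal sketch of what invoking one of these Bedrock-hosted FMs looks like outside the Canvas UI, the snippet below builds the JSON request body that Claude 2 expects and shows the boto3 call in comments. The model ID and prompt are illustrative assumptions; check the Bedrock console for the models enabled in your account and Region.

```python
import json

# Model ID for Claude 2 on Amazon Bedrock (an assumption -- verify the
# IDs available to your account/Region in the Bedrock console).
MODEL_ID = "anthropic.claude-v2"

def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON request body Claude 2 expects on Bedrock."""
    body = {
        # Claude's prompt format requires Human/Assistant turns.
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return json.dumps(body)

body = build_claude_request("Summarize our Q3 sales report in three bullets.")

# With AWS credentials configured, the request would be sent like this:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId=MODEL_ID, body=body,
#                                contentType="application/json")
```

Canvas performs the equivalent calls for you behind its no-code interface; the payload shape is what matters here.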
SageMaker Canvas provides access to ready-to-use tabular, NLP, and CV models for use cases including sentiment analysis, object detection in images, text detection in images, and entity extraction. The ready-to-use models require no model building and are powered by AWS AI services, including Amazon Rekognition, Amazon Textract, and Amazon Comprehend.
After you’ve built your model, you can use your company data to evaluate how well it performs before deploying it to production. You can easily compare model responses and select the one that best fits your needs.
SageMaker Canvas offers visual what-if analysis so you can change model inputs and then understand how the changes impact individual predictions. You can create automated batch predictions for an entire dataset, and, when the dataset is updated, your ML model is automatically updated. After the ML model is updated, you can review the updated predictions from the SageMaker Canvas no-code interface.
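The idea behind what-if analysis can be sketched in a few lines: hold every input fixed, change one feature, and compare the predictions. In Canvas this is done visually; the toy scoring function below is a stand-in for your trained model, not a Canvas API (Canvas does not expose a public SDK for this).

```python
# Conceptual illustration of what-if analysis. The feature names and the
# linear scoring function are made up for the example.

def score(features: dict) -> float:
    """Toy churn-risk model: a linear score over two features."""
    return 0.4 * features["support_tickets"] + 0.1 * features["tenure_years"]

baseline = {"support_tickets": 3, "tenure_years": 5}
what_if = {**baseline, "support_tickets": 6}  # change exactly one input

delta = score(what_if) - score(baseline)
print(f"Prediction change from the what-if edit: {delta:+.2f}")
```

Seeing the prediction shift attributable to a single edited input is exactly what the Canvas what-if view surfaces interactively.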
Share model predictions with Amazon QuickSight to build dashboards that combine traditional business intelligence and predictive data in the same interactive visual. In addition, SageMaker Canvas models can be shared and integrated directly in QuickSight, allowing analysts to generate highly accurate predictions for new data within a QuickSight dashboard.
You can register ML models created in SageMaker Canvas to the SageMaker Model Registry with a single click to integrate the model into existing model deployment CI/CD processes.
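For teams that script the registry side of this workflow, here is a minimal sketch of the request a model-version registration sends via boto3's `create_model_package`. The group name, container image URI, and S3 model path are placeholders; substitute the values from your own account.

```python
# Registering a model version with the SageMaker Model Registry.
# All names, URIs, and paths below are placeholder assumptions.
request = {
    "ModelPackageGroupName": "canvas-churn-models",  # assumed group name
    "ModelPackageDescription": "Model exported from SageMaker Canvas",
    "ModelApprovalStatus": "PendingManualApproval",
    "InferenceSpecification": {
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
            "ModelDataUrl": "s3://my-bucket/canvas/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
}

# With AWS credentials configured:
# import boto3
# sm = boto3.client("sagemaker")
# response = sm.create_model_package(**request)
# print(response["ModelPackageArn"])
```

Setting `ModelApprovalStatus` to `PendingManualApproval` lets an approval step in your CI/CD pipeline gate deployment of the registered version.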
You can share your SageMaker Canvas models with data scientists who use SageMaker Studio. Data scientists can then review and update your model, share the updated version with you, or deploy it for inference.
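Once a data scientist deploys the model to a SageMaker real-time endpoint, predictions can be requested programmatically with the `sagemaker-runtime` client. The snippet below builds the CSV payload a tabular endpoint typically accepts and shows the `invoke_endpoint` call in comments; the endpoint name and feature order are illustrative assumptions.

```python
import csv
import io

# Assumed endpoint name for illustration only.
ENDPOINT_NAME = "canvas-churn-endpoint"

def to_csv_row(features: list) -> str:
    """Serialize one record as the CSV payload tabular endpoints expect."""
    buf = io.StringIO()
    csv.writer(buf).writerow(features)
    return buf.getvalue().strip()

payload = to_csv_row([3, 5, "premium"])  # hypothetical feature values

# With AWS credentials configured:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
#                                    ContentType="text/csv", Body=payload)
# print(response["Body"].read().decode())
```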