Amazon SageMaker
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It removes the barriers and complexity that typically slow down developers who want to use machine learning. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.

Medical LLM - Medium (Free Trial)
By: John Snow Labs
Latest Version: 5.4.7
Use for chat, RAG, medical summarization, and open-book question answering with a context of up to 32K tokens.
Product Overview
Trained on diverse medical texts, this model excels in summarizing, answering complex clinical questions, and transforming clinical notes, patient encounters, and medical reports into concise summaries. Its question-answering capability ensures context-specific responses, enhancing decision-making. For physicians, this tool offers a quick grasp of a patient’s history, aiding timely decisions. Optimized for Retrieval-Augmented Generation (RAG), the model integrates with healthcare databases, EHRs, and PubMed to boost response quality. For enhanced patient care, we offer clinical de-identification for secure data processing, medical speech-to-text for accurate transcriptions, and a medical chatbot to facilitate patient interaction.
Key Data
Version: 5.4.7
Type: Model Package
Highlights
Benchmarking Results:
- Achieves 86.31% average on OpenMed benchmarks, surpassing GPT-4 (82.85%) and Med-PaLM-2 (84.08%)
- Performance in medical genetics: 95%; performance in professional medicine: 94.85%
- Clinical knowledge comprehension 89.81% and college biology mastery 93.75%
- Achieves 58.9% average on standard LLM benchmarks
- Balance of specialized medical knowledge and broad language understanding, demonstrated by 70.93% on GPT4All benchmark
- Achieves 75.54% performance in medical MCQAs and 79.4% on PubMedQA
Real-Time Inference
- Instance Type: ml.p4d.24xlarge
  - Maximum supported context length: 32K tokens
  - Tokens per second during real-time inference:
    - QA: up to 550 tokens per second
    - Summarization: up to 130 tokens per second
- Instance Type: ml.p5.48xlarge
  - Maximum supported context length: 32K tokens
  - Tokens per second during real-time inference:
    - QA: up to 1028 tokens per second
    - Summarization: up to 230 tokens per second
Pricing Information
Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your actual usage and costs might differ from this estimate; they will be reflected on your monthly AWS billing reports.
Contact us to request contract pricing for this product.
Estimating your costs
Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.
Software Pricing
- Model Realtime Inference: $19.96/hr, running on ml.p4d.24xlarge
- Model Batch Transform: $19.96/hr, running on ml.g5.48xlarge
Infrastructure Pricing
With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
Learn more about SageMaker pricing
- SageMaker Realtime Inference: $37.688/host/hr, running on ml.p4d.24xlarge
- SageMaker Batch Transform: $20.36/host/hr, running on ml.g5.48xlarge
About Free trial
Try this product for 15 days. There will be no software charges, but AWS infrastructure charges still apply. Free Trials will automatically convert to a paid subscription upon expiration.
Model Realtime Inference
For model deployment as a real-time endpoint in Amazon SageMaker, the software is priced hourly, and the rate can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.

| Instance Type | Realtime Inference/hr |
|---|---|
| ml.p4d.24xlarge (Vendor Recommended) | $19.96 |
| ml.p5.48xlarge | $19.96 |
Usage Information
Model input and output details
Input
Summary
Input Format
1. Chat Completion
Example Payload
{
"model": "/opt/ml/model",
"messages": [
{"role": "system", "content": "You are a helpful medical assistant."},
{"role": "user", "content": "What should I do if I have a fever and body aches?"}
],
"max_tokens": 1024,
"temperature": 0.7
}
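As a minimal invocation sketch (not from the vendor's documentation): assuming the model is deployed behind a SageMaker real-time endpoint and boto3 is installed, the payload above can be sent with the SageMaker runtime client. The endpoint name and region below are illustrative.

import json
import boto3

# SageMaker runtime client; region and endpoint name are illustrative assumptions.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {
    "model": "/opt/ml/model",  # SageMaker's fixed model location
    "messages": [
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": "What should I do if I have a fever and body aches?"},
    ],
    "max_tokens": 1024,
    "temperature": 0.7,
}

response = runtime.invoke_endpoint(
    EndpointName="medical-llm-medium",  # replace with your endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())
print(result["choices"][0]["message"]["content"])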
2. Text Completion
Single Prompt Example
{
"model": "/opt/ml/model",
"prompt": "How can I maintain good kidney health?",
"max_tokens": 512,
"temperature": 0.6
}
Multiple Prompts Example
{
"model": "/opt/ml/model",
"prompt": [
"How can I maintain good kidney health?",
"What are the best practices for kidney care?"
],
"max_tokens": 512,
"temperature": 0.6
}
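Under the same assumptions as the chat sketch above (illustrative endpoint name, boto3), a multi-prompt request returns one entry in choices per prompt, matched by index:

import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {
    "model": "/opt/ml/model",
    "prompt": [
        "How can I maintain good kidney health?",
        "What are the best practices for kidney care?",
    ],
    "max_tokens": 512,
    "temperature": 0.6,
}

response = runtime.invoke_endpoint(
    EndpointName="medical-llm-medium",  # illustrative endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())
# One choice per prompt, in the same order as the "prompt" array.
for choice in result["choices"]:
    print(choice["index"], choice["text"])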
Important Notes:
- Streaming Responses: Add "stream": true to your request payload to enable streaming
- Model Path Requirement: Always set "model": "/opt/ml/model" (SageMaker's fixed model location)
For additional details, check the documentation here.
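A hedged sketch of a streaming request (same illustrative endpoint name as above): boto3 exposes streaming responses through invoke_endpoint_with_response_stream, which yields the raw JSONL payload parts. Parts can split a line mid-chunk, so real code should buffer them (see the reconstruction sketch in the Output section below).

import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {
    "model": "/opt/ml/model",
    "prompt": "How can I maintain good kidney health?",
    "max_tokens": 512,
    "stream": True,  # enable streaming
}

response = runtime.invoke_endpoint_with_response_stream(
    EndpointName="medical-llm-medium",  # illustrative endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
# Each event carries a raw chunk of the "data: ..." JSONL stream.
for event in response["Body"]:
    print(event["PayloadPart"]["Bytes"].decode("utf-8"), end="")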
Input MIME type: application/json
Output
Summary
Output Format
The API delivers responses in two modes:
- Non-streaming: The complete response is returned as a single JSON object once the model finishes generating the output. This is the behavior when "stream": false (the default) is set in the request payload.
- Streaming: The response is delivered incrementally as JSON Lines (JSONL) chunks, each prefixed with data: and ending with a newline. The stream concludes with data: [DONE]. This mode is activated by setting "stream": true in the request payload.
This section details the structure and fields of the output for both chat completion and text completion endpoints in each mode, reflecting the behavior of a model hosted on Amazon SageMaker with the fixed path "/opt/ml/model".
Non-Streaming Responses
1. Chat Completion
Description:
The chat completion response contains the model's reply to a series of input messages (e.g., from "system" and "user" roles), as in the example payload shown earlier.
Example:
{ "id": "chatcmpl-1d202501a96e4580b6352ba7064e6bb8", "object": "chat.completion", "created": 1743488701, "model": "/opt/ml/model", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "The patient presents with symptoms of a ...", "tool_calls": [] }, "logprobs": null, "finish_reason": "stop", "stop_reason": null, } ], "usage": { "prompt_tokens": 206, "completion_tokens": 356, "total_tokens": 562, "prompt_tokens_details": null }, "prompt_logprobs": null }
2. Text Completion
Description:
The text completion response contains the model's generated text based on a single prompt or an array of prompts, as in the single and multiple prompt examples shown earlier.
Example (Single Prompt):
{ "id": "cmpl-a6d9952b95dc4c0dbea4cf9deeb46560", "object": "text_completion", "created": 1743488720, "model": "/opt/ml/model", "choices": [ { "index": 0, "text": "If you have a fever and body aches ...", "logprobs": null, "finish_reason": "stop", "stop_reason": null, "prompt_logprobs": null } ], "usage": { "prompt_tokens": 14, "completion_tokens": 368, "total_tokens": 382, "prompt_tokens_details": null } }
Example (Multiple Prompts):
{ "id": "cmpl-86c6f7fe2ead4dc79ba5942eecfb9930", "object": "text_completion", "created": 1743489812, "model": "/opt/ml/model", "choices": [ { "index": 0, "text": "To maintain good kidney health ...", "logprobs": null, "finish_reason": "stop", "stop_reason": null, "prompt_logprobs": null }, { "index": 1, "text": "Best practices for kidney care include ...", "logprobs": null, "finish_reason": "stop", "stop_reason": null, "prompt_logprobs": null } ], "usage": { "prompt_tokens": 20, "completion_tokens": 50, "total_tokens": 70, "prompt_tokens_details": null } }
Streaming Responses
In streaming mode ("stream": true), the API delivers the response as a series of JSON Lines (JSONL) chunks, each prefixed with data: and terminated with a newline. The stream ends with data: [DONE]. On SageMaker, the stream is consumed via the runtime's invoke_endpoint_with_response_stream API, as sketched earlier.
1. Chat Completion (Streaming)
Description:
Each chunk contains a portion of the assistant’s message. The full response is reconstructed by concatenating the content fields from the delta objects in the order received.
Example
data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null,"stop_reason":null}]} data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":"If"},"logprobs":null,"finish_reason":null,"stop_reason":null}]} data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":" you"},"logprobs":null,"finish_reason":null,"stop_reason":null}]} data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":" have"},"logprobs":null,"finish_reason":"length","stop_reason":null}]} data: [DONE]
2. Text Completion (Streaming)
Description:
Each chunk contains a portion of the generated text. The full response is reconstructed by concatenating the text fields from each chunk in the order received.
Example:
data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":"If","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null} data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" you","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null} data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" have","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null} data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" a","logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":null} data: [DONE]
Output MIME type: application/json, text/event-stream
End User License Agreement
By subscribing to this product, you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).
Support Information
Medical LLM - Medium
For assistance, reach out to support@johnsnowlabs.com or the Slack channel: https://spark-nlp.slack.com/archives/C06HG18DDDH
AWS Infrastructure
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Learn More
Refund Policy
No refunds are possible.
Customer Reviews
There are currently no reviews for this product.