Code Interpreter

Use Code Interpreter to let the model write and run Python code in a secure, isolated container. This tool is useful for calculations, data analysis, file processing, and other computation-heavy tasks.

Note

The OCI code interpreter tool uses the same format as the OpenAI code interpreter tool in the Responses API when called through the OCI OpenAI-compatible endpoint. For syntax and request details, see the Code Interpreter topic in the OpenAI documentation.
Tip

In prompts, reference the Code Interpreter as the python tool. For example: Use the python tool to solve the problem.

Because code runs in an isolated environment with no external network access, Code Interpreter is a good option when the workflow needs computation or file processing in a controlled setting.

Using the Code Interpreter

If you're using this for the first time, think of Code Interpreter as a temporary Python workspace for the model.

You can use it for tasks such as:

  • Solving math problems
  • Analyzing uploaded files
  • Cleaning or transforming data
  • Creating charts or tables
  • Generating output files such as logs or processed datasets

Execution Environment

The Python environment includes more than 420 preinstalled libraries, including Pandas, Matplotlib, and SciPy, so many common tasks work without extra setup.

The code runs inside a sandbox container. This container is the working environment where Python runs and where uploaded files, generated files, and temporary working data are stored during the session.

Container Memory Limits

Code Interpreter containers use a shared memory pool of 64 GB per tenancy.

Supported container sizes are:

  • 1 GB
  • 4 GB
  • 16 GB
  • 64 GB

This shared limit can be divided across multiple containers. For example, it can support:

  • Sixty-four 1 GB containers
  • Sixteen 4 GB containers
  • Four 16 GB containers
  • One 64 GB container
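The allocations above follow directly from dividing the shared pool by each supported size, which can be sketched as:

```python
# Shared memory pool per tenancy and the supported container sizes,
# as documented on this page (all values in GB).
POOL_GB = 64
SUPPORTED_SIZES_GB = [1, 4, 16, 64]

# How many containers of each size the shared pool can hold.
allocations = {size: POOL_GB // size for size in SUPPORTED_SIZES_GB}
print(allocations)  # {1: 64, 4: 16, 16: 4, 64: 1}
```

Mixed sizes work too, as long as the total stays within the 64 GB pool.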

If you need more capacity, you can submit a service request.

Container Expiration

A container expires after 20 minutes of inactivity.

This is important to know when building multi-step flows:

  • An expired container can't be reused
  • You must create a new container
  • Files must be uploaded again if needed
  • In-memory state, such as Python variables and Python objects, is lost

Because of this, it's best to treat containers as temporary working environments.
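One way to cope with expiration in a multi-step flow is to retry once with a fresh container, re-uploading any files the task needs. The sketch below is a hypothetical pattern, not part of any SDK: the callables stand in for your actual client calls, and the exception raised for an expired container is an assumption you should verify against what your client returns.

```python
def run_in_container(send, new_container, upload_files, container_id):
    """Run send(container_id); if the container has expired, create a
    fresh one, re-upload the needed files, and retry once.

    send          -- callable issuing the Responses API request for a container id
    new_container -- callable returning a fresh container id
    upload_files  -- callable re-uploading required files to a container id
    """
    try:
        return send(container_id)
    except LookupError:
        # Stand-in for the client's "container not found" error (assumption).
        # Expired containers can't be reused, so start over with a new one.
        fresh_id = new_container()
        upload_files(fresh_id)
        return send(fresh_id)
```

In practice, replace `LookupError` with the not-found error your SDK raises, and keep the list of required uploads alongside the flow so the retry can restore them.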

Example

To use Code Interpreter, add a tool definition in the tools property with "type": "code_interpreter".

# client is an OpenAI-compatible client configured for the OCI Generative AI endpoint.
response = client.responses.create(
    model="xai.grok-4-1-fast-reasoning",
    tools=[
        {
            "type": "code_interpreter",
            "container": {"type": "auto"}
        }
    ],
    instructions="Use the python tool to solve the problem and explain the result.",
    input="Find the value of (18 / 3) + 7 * 2."
)

print(response.output_text)

In this example, the python tool is Code Interpreter: the model evaluates (18 / 3) + 7 * 2 = 20 in the container and explains the result.

Containers for Code Interpreter

The Code Interpreter tool requires a container object. The container is the isolated environment where the model runs Python code.

A container can hold:

  • Uploaded files
  • Files created by the model
  • Temporary working data during execution

Code Interpreter supports two container modes:

  • auto: OCI Generative AI provisions or reuses a container in the current context.
  • container OCID: You create the container yourself, define the memory size, and provide the OCID.

For both options, the containers are created and managed in OCI Generative AI. The code that runs in those containers also runs in the OCI Generative AI tenancy.

Note

The OCI code interpreter tool uses the same format as the OpenAI code interpreter tool in the Responses API when called through the OCI OpenAI-compatible endpoint. For syntax and request details, see Containers for Code Interpreters and the OpenAI Containers API documentation.

Auto Mode

In auto mode, the service provisions or reuses the container for you. This is the easiest option and a good starting point for most users.

Use auto mode when:

  • You want OCI Generative AI to manage the container
  • You don't need direct control over the environment
  • You want a simpler setup

You can optionally specify memory_limit. If you don't specify it, the default is 1 GB.

response = client.responses.create(
    model="xai.grok-code-fast-1",
    tools=[{
        "type": "code_interpreter",
        "container": {
            "type": "auto",
            "memory_limit": "1g"
        }
    }],
    input="Use the python tool to calculate the mean, median, and standard deviation of 12, 18, 24, 30, and 42."
)

Explicit Mode

In explicit mode, you create the container first and set the size. Then you pass the container ID in the request.

Use explicit mode when you want more control over container settings, such as memory size, or when you want to maintain a dedicated session.

container = client.containers.create(name="test-container", memory_limit="4g")

response = client.responses.create(
    model="xai.grok-code-fast-1",
    tools=[{
        "type": "code_interpreter",
        "container": container.id
    }],
    tool_choice="required",
    input="Use the python tool to calculate the mean, median, and standard deviation of 12, 18, 24, 30, and 42."
)

print(response.output_text)

Files in Code Interpreter

Code Interpreter supports dynamic file interaction during the life of the container. The model can read files you provide and can also create new files.

This is useful for workflows such as:

  • Reading a CSV or PDF
  • Generating a chart
  • Saving processed output
  • Creating logs or reports

API reference: see the OpenAI Container Files API documentation and the Container Files API reference.

File Persistence

Files created or updated by the python tool persist across code execution turns in the same container, as long as the container hasn't expired.

This means the model can build on earlier work in the same session. For example, it can:

  • Read a file
  • Analyze it
  • Save a chart
  • Use that chart later in the same container

When the container expires, that state is no longer available.
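As a sketch of how this persistence is typically used, the two calls below share one explicitly created container, so a file the model saves in the first turn is still on disk in the second. The model name and prompts are placeholders, and `previous_response_id` follows the Responses API shape.

```python
def analyze_then_chart(client, container_id):
    # Turn 1: the model reads data.csv (already uploaded) and writes summary.csv.
    first = client.responses.create(
        model="xai.grok-code-fast-1",
        tools=[{"type": "code_interpreter", "container": container_id}],
        input="Use the python tool to load data.csv and save summary.csv.",
    )
    # Turn 2: same container, so summary.csv from turn 1 is still available,
    # provided the container hasn't hit the 20-minute inactivity limit.
    second = client.responses.create(
        model="xai.grok-code-fast-1",
        tools=[{"type": "code_interpreter", "container": container_id}],
        input="Use the python tool to chart summary.csv and save chart.png.",
        previous_response_id=first.id,
    )
    return second
```

If the second call arrives after expiration, the file is gone and the flow has to re-upload and recompute.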

Uploading and Managing Files

You can manage container files through the Container Files APIs.

Common operations include:

  • Create Container File: Add a file to the container, either by multipart upload or by referencing an existing /v1/files ID
  • List Container Files: View files in the container
  • Delete Container File: Remove a file
  • Retrieve Container File Content: Download a file from the container

This lets you use the container as a temporary workspace for model-driven code execution.
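The operations above can be sketched with an OpenAI-compatible Python client. The method names mirror the OpenAI Containers API surface and are assumptions to check against the Container Files API reference.

```python
def upload_list_fetch_delete(client, container_id, path):
    # Create Container File: multipart upload of a local file.
    with open(path, "rb") as f:
        uploaded = client.containers.files.create(container_id=container_id, file=f)

    # List Container Files: view what's currently in the container.
    listing = client.containers.files.list(container_id=container_id)

    # Retrieve Container File Content: download the file's bytes.
    content = client.containers.files.content.retrieve(
        uploaded.id, container_id=container_id
    )

    # Delete Container File: remove the file when it's no longer needed.
    client.containers.files.delete(uploaded.id, container_id=container_id)
    return uploaded, listing, content
```

Instead of a multipart upload, the create call can also reference an existing /v1/files ID, as noted above.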

Output Files and Citations

When the model creates files, those files are stored in the container and can be returned as annotations in the response.

These annotations can include:

  • container_id
  • file_id
  • filename

You can use these values to retrieve the generated file content.

The response can include a container_file_citation object that identifies the generated file. Use the Retrieve Container File Content operation to download the file.
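A small helper can collect those citation fields from a response. The annotation shape assumed here (a container_file_citation annotation on message content parts, carrying container_id, file_id, and filename) follows the OpenAI Responses format; verify it against your actual responses.

```python
def find_generated_files(response):
    """Return container file citations found in a Responses API response."""
    citations = []
    for item in response.output:
        # Only message items carry content parts with annotations.
        if getattr(item, "type", None) != "message":
            continue
        for part in item.content:
            for ann in getattr(part, "annotations", None) or []:
                if getattr(ann, "type", None) == "container_file_citation":
                    citations.append({
                        "container_id": ann.container_id,
                        "file_id": ann.file_id,
                        "filename": ann.filename,
                    })
    return citations
```

Each returned entry supplies the identifiers needed by the Retrieve Container File Content operation to download the file.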

The OCI Responses API supports OpenAI-compatible endpoints for features such as Responses, Files, Containers, and Container Files. You can use the related OpenAI documentation as a reference for request structure, response formats, and general workflows. When using these APIs with OCI, send requests to the OCI Generative AI inference endpoints, use OCI authentication, and note that the resources and execution remain in OCI Generative AI, not in an OpenAI tenancy.

Note

The OCI Files API uses the same format as the OpenAI Files API with the OCI OpenAI-compatible endpoint. For syntax and request details, see the OpenAI Files API documentation.
Note

The OCI code interpreter that's used as a tool for the Responses API uses the same format as the OpenAI code interpreter with the OCI OpenAI-compatible endpoint. For syntax and request details, see the following references: