Overview

Sphinx works with any Jupyter kernel that VS Code can connect to. This means you can use Sphinx with cloud notebooks, remote servers, and enterprise data platforms—not just local Python environments.
Sphinx requires the Microsoft Jupyter extension (ms-toolsai.jupyter), which is automatically installed as a dependency when you install Sphinx.

How It Works

Sphinx uses VS Code’s standardized notebook and kernel APIs provided by the Jupyter extension. When you connect VS Code to a remote kernel or cloud notebook environment, Sphinx automatically works with that connection—no additional configuration required.

Supported Backends

Databricks

Connect Sphinx to Databricks notebooks and clusters for enterprise data workflows.

Databricks Extension

Official Databricks extension for VS Code with notebook and cluster support.
Installation:
  1. Install the Databricks extension from the VS Code Marketplace
  2. Configure your Databricks workspace connection
  3. Open or create a notebook in your Databricks workspace
  4. Select a Databricks cluster as your kernel
  5. Use Sphinx with Cmd+T / Ctrl+T
For Databricks data connections, you can also use Sphinx’s integration credentials to securely access Databricks data from any kernel.
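As a rough sketch of that pattern, the snippet below queries Databricks SQL with the databricks-sql-connector package, reading the workspace hostname, HTTP path, and access token from environment variables; the variable names and query are illustrative placeholders, not Sphinx-specific settings.

  # Sketch: query Databricks SQL from any kernel via databricks-sql-connector.
  # pip install databricks-sql-connector
  import os
  from databricks import sql

  with sql.connect(
      server_hostname=os.environ["DATABRICKS_HOST"],  # e.g. adb-1234.azuredatabricks.net
      http_path=os.environ["DATABRICKS_HTTP_PATH"],   # SQL warehouse or cluster HTTP path
      access_token=os.environ["DATABRICKS_TOKEN"],    # personal access token
  ) as connection:
      with connection.cursor() as cursor:
          cursor.execute("SELECT current_catalog(), current_schema()")
          print(cursor.fetchall())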

Google Colab

Run Sphinx with Google Colab notebooks for free GPU/TPU access.

Colab Extension

Connect VS Code to Google Colab runtimes.
Installation:
  1. Install the Google Colab extension
  2. Sign in with your Google account
  3. Connect to a Colab runtime (GPU/TPU available)
  4. Open your .ipynb file
  5. Use Sphinx
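Once connected, a quick sanity check like the sketch below confirms the selected runtime actually exposes an accelerator; it assumes PyTorch is available, as it is on standard Colab runtimes.

  # Sketch: confirm the Colab runtime's accelerator from inside the notebook.
  import torch

  if torch.cuda.is_available():
      print("GPU:", torch.cuda.get_device_name(0))
  else:
      print("No GPU attached; check the Colab runtime type")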

BigQuery Notebooks (Google Cloud)

Use Sphinx with BigQuery notebooks for large-scale data analysis.

Google Cloud Code

Google Cloud extension with BigQuery notebook support.
Installation:
  1. Install Google Cloud Code
  2. Authenticate with your Google Cloud account
  3. Install the BigFrames library:
    pip install bigframes
    
  4. Use Sphinx to query and analyze data
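As an illustration, a minimal BigFrames query could look like the sketch below; the project ID is a placeholder, and the table is a well-known BigQuery public dataset.

  # Sketch: query BigQuery through BigFrames (BigQuery DataFrames).
  import bigframes.pandas as bpd

  bpd.options.bigquery.project = "my-gcp-project"  # placeholder project ID

  # Public dataset used purely for illustration.
  df = bpd.read_gbq("bigquery-public-data.usa_names.usa_1910_2013")
  print(df.head())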

Azure Machine Learning

Connect to Azure ML compute instances and managed notebooks.

Azure Machine Learning

Official Azure ML extension for VS Code.
Installation:
  1. Install the Azure Machine Learning extension
  2. Sign in to your Azure account
  3. Connect to your Azure ML workspace
  4. Create or attach to a compute instance
  5. Open notebooks and use Sphinx
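To reach the workspace from notebook code as well, a sketch along these lines uses the azure-ai-ml (v2) SDK; the subscription, resource group, and workspace names are placeholders.

  # Sketch: connect to an Azure ML workspace with the v2 SDK (azure-ai-ml).
  # pip install azure-ai-ml azure-identity
  from azure.ai.ml import MLClient
  from azure.identity import DefaultAzureCredential

  ml_client = MLClient(
      credential=DefaultAzureCredential(),
      subscription_id="<subscription-id>",     # placeholder
      resource_group_name="<resource-group>",  # placeholder
      workspace_name="<workspace-name>",       # placeholder
  )

  # List compute targets to confirm the connection works.
  for compute in ml_client.compute.list():
      print(compute.name, compute.type)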

Amazon SageMaker

Use Sphinx with SageMaker notebook instances and Studio.

AWS Toolkit

AWS Toolkit with SageMaker integration.
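Setup details vary across SageMaker Studio and notebook instances, but a quick check like the sketch below verifies that the kernel has working AWS credentials and can see SageMaker resources; it assumes boto3 is installed and a region and credentials are already configured, as they are inside SageMaker environments.

  # Sketch: verify AWS credentials and SageMaker access from the notebook kernel.
  import boto3

  print("Caller identity:", boto3.client("sts").get_caller_identity()["Arn"])

  sm = boto3.client("sagemaker")
  response = sm.list_domains(MaxResults=5)
  print("SageMaker domains visible:", [d["DomainName"] for d in response["Domains"]])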

JupyterHub / Remote Jupyter Servers

Connect to any JupyterHub deployment or remote Jupyter server.

Jupyter Extension

Built-in support for remote Jupyter connections.
Connect to a Remote Server:
  1. Open VS Code Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
  2. Run “Jupyter: Specify Jupyter Server for Connections”
  3. Select “Existing” and enter your server URL
  4. Authenticate if required (token or password)
  5. Your remote kernels appear in the kernel picker
Connect to JupyterHub:
  1. Get your JupyterHub server URL (e.g., https://hub.example.com)
  2. Use the same connection flow as remote servers
  3. Authenticate with your JupyterHub credentials
  4. Access your hub-managed kernels
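If a connection fails, it can help to confirm the server is reachable outside VS Code first; the sketch below calls the Jupyter Server REST API with the URL and token of a remote server (both placeholders here). For JupyterHub, the equivalent API lives under your single-user server URL.

  # Sketch: check that a remote Jupyter server responds before connecting VS Code.
  import requests

  server_url = "https://jupyter.example.com"  # placeholder
  token = "<your-token>"                      # placeholder

  resp = requests.get(
      f"{server_url}/api/status",
      headers={"Authorization": f"token {token}"},
      timeout=10,
  )
  resp.raise_for_status()
  print(resp.json())  # includes fields like "started", "last_activity", "kernels"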

Conda Environments

Use Sphinx with different Conda environments on your local machine.
Setup:
  1. Ensure the Python extension is installed
  2. Create or activate a Conda environment with Jupyter:
    conda create -n myenv python=3.11 ipykernel
    conda activate myenv
    python -m ipykernel install --user --name myenv
    
  3. Open a notebook in VS Code
  4. Select your Conda environment from the kernel picker
  5. Use Sphinx with your Conda kernel
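To confirm the notebook is really running in the environment you picked, a cell like the one below prints the interpreter path, which should point inside the Conda environment’s directory:

  # Verify which Python environment the current kernel is using.
  import sys

  print(sys.executable)  # e.g. .../envs/myenv/bin/python
  print(sys.prefix)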

Open VSX Registry

If you’re using an open-source VS Code distribution or fork (Cursor, Windsurf, etc.) that uses the Open VSX Registry instead of the Microsoft Marketplace, you can find compatible extensions there.
Note that some proprietary extensions (Databricks, Azure ML, AWS Toolkit) may not be available on Open VSX. Check each extension’s licensing and availability.

Troubleshooting

If Sphinx doesn’t detect your kernel:
  1. Ensure the kernel is running and connected in VS Code
  2. Try executing a cell manually first to activate the kernel
  3. Check that the Jupyter extension shows the kernel in the status bar
  4. Restart VS Code if the kernel was recently connected
Remote kernels may disconnect due to network issues or idle timeouts. Try:
  • Increasing your server’s idle timeout settings
  • Using a more stable network connection
  • Checking your cloud provider’s kernel persistence settings
Sphinx streams data between your machine and the remote kernel. For better performance:
  • Use kernels geographically close to you
  • Consider using the remote kernel for heavy computation only
  • Keep datasets on the remote machine when possible
Sphinx requires the Microsoft Jupyter extension API. If you’re using an alternative VS Code distribution:
  1. Ensure ms-toolsai.jupyter is installed and working
  2. Check that the Jupyter extension version is 2024.1.0 or later
  3. Some forks may have compatibility issues—try the official VS Code if problems persist

Best Practices

  1. Verify kernel connection first: Before using Sphinx, run a simple cell (like print("hello")) to ensure your kernel is connected and responsive.
  2. Keep notebooks saved: Remote connections can be unstable. Enable VS Code’s autosave and Sphinx’s autosave feature to prevent data loss.
  3. Use appropriate compute: Match your compute resources to your task. Use GPU kernels for deep learning, high-memory instances for large datasets.
  4. Manage credentials securely: Use Sphinx’s Secrets and Integrations features to manage API keys and database credentials instead of hardcoding them in notebooks.
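As a rough illustration of the last point, the sketch below reads a credential from an environment variable instead of embedding it in a cell; the variable name is hypothetical, and the exact way Sphinx’s Secrets and Integrations features expose values is not shown here.

  # Sketch: read a credential configured outside the notebook
  # (e.g. via a secrets manager) instead of hardcoding it in a cell.
  import os

  db_password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
  if db_password is None:
      raise RuntimeError("DB_PASSWORD is not set in this kernel's environment")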