Documentation Index

Fetch the complete documentation index at: https://scalarfield.io/docs/llms.txt

Use this file to discover all available pages before exploring further.

Code Execution in Chat

When you run code in chat, each cell executes in a secure, isolated environment with dedicated resources. Your variables, imports, and data persist across cells within the same conversation. The resources available to you depend on your plan:
Resource       Free        Pro          Ultra
Max run time   2 minutes   10 minutes   15 minutes
Memory         1 GB        2 GB         2 GB
CPU            2 cores     2 cores      2 cores
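As a sketch of how persistence works, state defined in one cell remains available in later cells of the same conversation (cell boundaries are shown as comments; the variable names are illustrative):

```python
# Cell 1: imports and data defined here persist for the rest
# of the conversation.
import statistics

prices = [100, 102, 99, 101]

# Cell 2: reuse `prices` and the `statistics` import from Cell 1
# without redefining either.
mean_price = statistics.mean(prices)
print(mean_price)
```

Only a kernel restart clears this state; until then, each new cell sees everything earlier cells defined.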

What happens when you hit a limit

  • Time limit: The cell is stopped and you see a timeout error. If you regularly hit the limit, consider upgrading your plan for longer-running computations.
  • Memory limit: The cell is stopped and you see a memory error. Try reducing data size or processing in smaller batches.
  • Kernel restart: Restarting the kernel clears all variables, imports, and state. You can re-run previous cells to restore your session.
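One way to stay under the memory limit is to process records in fixed-size batches instead of materializing everything at once. A minimal sketch (the batch size and data source are illustrative):

```python
def batched(iterable, batch_size):
    """Yield lists of up to `batch_size` items without loading the
    whole iterable into memory at once."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Process a large stream 1,000 items at a time instead of all at once.
total = 0
for batch in batched(range(10_000), 1_000):
    total += sum(batch)  # replace with your per-batch computation

print(total)
```

Because only one batch is held in memory at a time, peak memory stays roughly constant no matter how large the stream is.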

Trading Strategies and Monitors

Each scheduled run of a strategy or monitor executes in a fresh, isolated environment. The same limits apply regardless of your plan:
Resource       Limit
Max run time   3 minutes
Memory         512 MB
CPU            2 cores

How scheduled runs work

  • Fresh environment: Each run starts with a clean slate — no state is carried over from previous runs. All imports, data fetches, and variable definitions must be included in your code.
  • One at a time: Only one run per strategy or monitor can execute at a time. If the previous run is still in progress when the next one is scheduled, the new run is skipped.
  • Failed runs: If a run exceeds the time or memory limit, it is marked as failed in the execution results panel.
  • Credit cost: Each run consumes 0.01 credits per second of run time, so a run that takes 90 seconds costs 0.9 credits. See Pricing & Credits for details.
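Because each scheduled run starts from a clean slate, a strategy script must be fully self-contained: every import, configuration value, and data fetch has to appear in the script itself. A minimal sketch, where the `fetch_prices` helper and the threshold are hypothetical stand-ins rather than part of any Scalar Field API:

```python
# Nothing carries over from the previous run, so all imports and
# definitions live inside the script.
import statistics

def fetch_prices():
    # Hypothetical data fetch; a real strategy would call its
    # data source here on every run.
    return [101.2, 100.8, 102.5, 101.9]

THRESHOLD = 101.0  # illustrative signal threshold

prices = fetch_prices()
signal = "buy" if statistics.mean(prices) > THRESHOLD else "hold"
print(signal)
```

Anything this script needs tomorrow must be recomputed or refetched tomorrow; there is no saved session to lean on.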

Tips

  • Break work into steps. In chat, split long data processing across multiple code cells rather than running everything in one cell. For strategies and monitors, fetch only the data you need.
  • Process large datasets incrementally. Use filtering, pagination, or chunked processing instead of loading everything into memory at once.
  • Monitor execution times. Check the execution results panel for your strategies and monitors to see how long each run takes and how close you are to the limit.
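The incremental-processing tip can be sketched as a pagination loop that fetches and aggregates one page at a time. The `fetch_page` function below is hypothetical; substitute your real API or database query:

```python
def fetch_page(offset, limit):
    # Hypothetical paged data source: stands in for an API or
    # database that returns at most `limit` rows starting at `offset`.
    data = list(range(2_500))
    return data[offset:offset + limit]

PAGE_SIZE = 500
offset = 0
running_total = 0

# Keep only one page in memory at a time; aggregate as you go.
while True:
    page = fetch_page(offset, PAGE_SIZE)
    if not page:
        break
    running_total += sum(page)
    offset += PAGE_SIZE

print(running_total)
```

An empty page signals the end of the data, so the loop terminates without ever knowing the total size up front.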