
Chapter 12: Performance and Optimization

You've now seen how to solve complex problems with Computo and Permuto. As you move from writing small scripts to building business-critical data pipelines, performance becomes an important consideration.

The Computo engine is designed to be efficient, but the structure of your script can have a significant impact on its execution speed and memory usage. This chapter provides guidance on writing high-performance transformations and understanding the trade-offs involved.

The Golden Rule: let is Your Best Friend

The single most important optimization technique in Computo is the proper use of the let operator.

Anti-Pattern: Re-evaluating Expressions

Consider this script, which performs the same nested get lookups multiple times:

["obj",
  ["name", ["get", ["get", ["$input"], "/user"], "/profile"], "/name"]],
  ["email", ["get", ["get", ["$input"], "/user"], "/profile"], "/email"]]
]

The expression ["get", ["get", ["$input"], "/user"], "/profile"] is evaluated twice. While this is a small example, in a complex script with many nested operators, this redundant work can add up.

Optimized Pattern: Bind Once, Use Many Times

By binding the result of an expensive or frequently used expression to a variable, you ensure it is evaluated only once.

["let",
  [
    ["user_profile", ["get", ["get", ["$input"], "/user"], "/profile"]]
  ],
  ["obj",
    ["name", ["get", ["$", "/user_profile"], "/name"]],
    ["email", ["get", ["$", "/user_profile"], "/email"]]
  ]
]

This version is not only faster and more memory-efficient, but it's also significantly more readable. When in doubt, use let to store the result of any non-trivial expression that you plan to use more than once.
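The same rule applies to array results. In the sketch below, the /users, /active, and /name paths are illustrative, and count is assumed to return the length of an array, as in its use later in this chapter. A filter result is bound once and then reused both to count the active users and to list their names, so the filter runs only once:

["let",
  [
    ["active_users",
      ["filter",
        ["get", ["$input"], "/users"],
        ["lambda", ["u"], ["get", ["$", "/u"], "/active"]]
      ]
    ]
  ],
  ["obj",
    ["active_count", ["count", ["$", "/active_users"]]],
    ["active_names",
      ["map",
        ["$", "/active_users"],
        ["lambda", ["u"], ["get", ["$", "/u"], "/name"]]
      ]
    ]
  ]
]

Without the binding, the filter would run twice, doubling the work on large inputs.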

Understanding Lazy Evaluation

Computo's if operator is "lazy." This means it only evaluates the branch that is actually chosen. The other branch is never touched. This has important performance implications.

Inefficient: Evaluating Before the if

["let",
  [
    ["premium_dashboard", <... very expensive expression to build a dashboard ...>],
    ["basic_dashboard", <... another expensive expression ...>]
  ],
  ["if",
    ["get", ["$input"], "/user/is_premium"],
    ["$", "/premium_dashboard"],
    ["$", "/basic_dashboard"]
  ]
]

In this script, both the premium and basic dashboards are fully computed and stored in variables, even though only one will ever be used.

Efficient: Evaluating Inside the if

By moving the expensive expressions inside the if branches, you ensure that only the necessary work is done.

["if",
  ["get", ["$input"], "/user/is_premium"],
  <... very expensive expression to build a dashboard ...>,
  <... another expensive expression ...>
]

This is a critical pattern for performance. Defer expensive computations by placing them inside the branches of an if statement whenever possible.
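The two techniques combine naturally: bind variables with let inside the branch that needs them, so even the binding work is deferred. A minimal sketch, assuming a hypothetical /usage/history path and an illustrative dashboard shape:

["if",
  ["get", ["$input"], "/user/is_premium"],
  ["let",
    [
      ["history", ["get", ["$input"], "/usage/history"]]
    ],
    ["obj",
      ["tier", "premium"],
      ["recent_activity", ["map", ["$", "/history"], ["lambda", ["h"], ["get", ["$", "/h"], "/summary"]]]]
    ]
  ],
  ["obj", ["tier", "basic"]]
]

For a non-premium user, neither the history lookup nor the map over it is ever evaluated.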

Operator Performance Characteristics

Not all operators are created equal. As a rough guide, operators that read or build a single value, such as get and obj, are cheap, while array operators such as map and filter do work proportional to the size of the array they receive. In practice, a script's cost is dominated by how much data flows through its array pipeline, which is why the ordering described next matters so much.

Pipeline Ordering Matters

The order of your chained array operations can have a massive impact on performance. The key principle is to reduce the size of your dataset as early as possible.

Anti-Pattern: map before filter

Imagine you need to get the names of all active users.

["map",
  ["filter",
    ["map",
      ["get", ["$input"], "/users"],
      ["lambda", ["u"], ["obj", ["name", ["get",...]], ["active", ["get",...]]]]
    ],
    ["lambda", ["u"], ["get", ["$", "/u"], "/active"]]
  ],
  ["lambda", ["u"], ["get", ["$", "/u"], "/name"]]
]

This is highly inefficient. If you have 10,000 users, the inner map runs first and creates 10,000 new, temporary objects in memory. Then, filter iterates over those 10,000 new objects and likely discards most of them.

Optimized Pattern: filter before map

["map",
  ["filter",
    ["get", ["$input"], "/users"],
    ["lambda", ["u"], ["get", ["$", "/u"], "/active"]]
  ],
  ["lambda", ["u"], ["get", ["$", "/u"], "/name"]]
]

This version is dramatically better. The filter operator runs first on the raw input. If only 100 users are active, the subsequent map only has to do work on those 100 items. It creates far fewer temporary objects and performs fewer iterations.

Always filter as early as you can to reduce the amount of data that later stages in your pipeline need to process.
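If you genuinely need the enriched objects from the anti-pattern above, the same principle still applies: filter on the raw field first, then build objects only for the items that survive. A sketch reusing the same illustrative fields:

["map",
  ["filter",
    ["get", ["$input"], "/users"],
    ["lambda", ["u"], ["get", ["$", "/u"], "/active"]]
  ],
  ["lambda", ["u"],
    ["obj",
      ["name", ["get", ["$", "/u"], "/name"]],
      ["active", ["get", ["$", "/u"], "/active"]]
    ]
  ]
]

With 10,000 users of whom 100 are active, this builds 100 objects instead of 10,000.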

Tail Call Optimization: No Stack Overflow Worries

Computo implements tail call optimization (TCO), which means you don't need to worry about stack overflow errors from deeply nested control flow structures. This makes Computo suitable for complex, programmatically-generated transformations.

What This Means for You

In a conventional recursive evaluator, deeply nested expressions can overflow the call stack. Computo eliminates this concern:

// Nesting like this, extended to arbitrary depth, would overflow the stack in a naive recursive evaluator, but not in Computo
["if", ["count", ["$input"]], 
  ["if", [">", ["get", ["$input"], "/score"], 90],
    ["if", ["==", ["get", ["$input"], "/status"], "active"],
      ["if", ["!=", ["get", ["$input"], "/role"], "admin"],
        "Process user",
        "Skip admin"
      ],
      "Inactive user"
    ],
    "Low score"
  ],
  "Empty input"
]

Practical Benefits

  1. Complex conditional trees: You can nest if statements arbitrarily deep without risking stack overflow
  2. Deep let scoping: Nested variable scopes don't consume stack space
  3. Programmatic generation: Generated scripts with deep nesting work reliably
  4. No configuration needed: TCO is automatic and always enabled

This optimization makes Computo robust for complex business logic and machine-generated transformations where nesting depth might be unpredictable.
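As a small illustration of the deep let scoping mentioned above, a generator could emit a chain like the following (only three levels here for readability, with illustrative paths) at any depth without exhausting the stack:

["let", [["user", ["get", ["$input"], "/user"]]],
  ["let", [["profile", ["get", ["$", "/user"], "/profile"]]],
    ["let", [["name", ["get", ["$", "/profile"], "/name"]]],
      ["$", "/name"]
    ]
  ]
]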

In This Chapter

You've learned the core principles of writing high-performance Computo scripts:

  * Use let to avoid re-evaluating expressions.
  * Leverage the lazy evaluation of the if operator to defer expensive work.
  * Understand the relative performance costs of different operators.
  * Structure your array pipelines to filter early and map late.
  * Take advantage of tail call optimization for deeply nested control flow without stack overflow concerns.

By applying these principles, you can ensure that your transformations are not only correct but also fast and efficient enough for production workloads. The final chapters will cover error handling and best practices to round out your expertise.
