
In this tutorial, we build a sophisticated Tree-of-Thoughts (ToT) multi-branch reasoning agent from scratch. Instead of relying on linear chain-of-thought reasoning, we design a system that generates multiple reasoning branches, scores each branch with a heuristic evaluation function, prunes weak candidates, and continues expanding only the strongest paths. We combine an instruction-tuned transformer model with a custom tree structure and implement beam-search-style selection with depth-limited search. By grounding the system in the 24-game domain, we create a clear, objective benchmark for reasoning, where we can observe branch expansion, pruning, scoring, and goal detection in action.

!pip -q install -U transformers accelerate sentencepiece


import re
import math
from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Dict, Any


import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


MODEL_NAME = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)


print("Device:", device)
print("Model loaded:", MODEL_NAME)


@dataclass
class Node:
    depth: int
    numbers: List[float]
    exprs: List[str]
    thought: str = ""
    score: float = -1e9
    is_goal: bool = False
    parent: Optional["Node"] = None
    meta: Dict[str, Any] = field(default_factory=dict)


def pretty_state(nums: List[float], exprs: List[str]) -> str:
    pairs = [f"{e}={n:g}" for e, n in zip(exprs, nums)]
    return " | ".join(pairs)

We install the required libraries and load the FLAN-T5 model using the correct Seq2Seq architecture. We define our core Node data structure, which represents each reasoning state in the Tree-of-Thoughts search. We also initialize the device configuration and helper utilities that let us clearly print and inspect the reasoning state.
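As a quick sanity check, we can construct the root state for the puzzle [4, 1, 8, 7] and print it. This is a minimal standalone sketch that re-declares the Node dataclass and pretty_state helper so it runs on its own:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Dict, Any

# Minimal re-declaration of the Node structure described above.
@dataclass
class Node:
    depth: int
    numbers: List[float]
    exprs: List[str]
    thought: str = ""
    score: float = -1e9
    is_goal: bool = False
    parent: Optional["Node"] = None
    meta: Dict[str, Any] = field(default_factory=dict)

def pretty_state(nums: List[float], exprs: List[str]) -> str:
    pairs = [f"{e}={n:g}" for e, n in zip(exprs, nums)]
    return " | ".join(pairs)

# The root of the search tree: four untouched numbers, each its own expression.
root = Node(depth=0, numbers=[4.0, 1.0, 8.0, 7.0], exprs=["4", "1", "8", "7"])
print(pretty_state(root.numbers, root.exprs))  # 4=4 | 1=1 | 8=8 | 7=7
```

Each combine step will shrink this state by one item until a single expression remains.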

OPS = ["+", "-", "*", "/"]


def safe_apply(a: float, b: float, op: str) -> Optional[float]:
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    if op == "/":
        if abs(b) < 1e-12:
            return None
        return a / b
    return None


def combine_expr(ea: str, eb: str, op: str) -> str:
    return f"({ea} {op} {eb})"


def is_24(x: float, tol: float = 1e-6) -> bool:
    return abs(x - 24.0) <= tol


def one_step_closeness(nums: List[float]) -> float:
    if len(nums) == 1:
        return abs(nums[0] - 24.0)
    best = float("inf")
    n = len(nums)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            a, b = nums[i], nums[j]
            for op in OPS:
                r = safe_apply(a, b, op)
                if r is None:
                    continue
                best = min(best, abs(r - 24.0))
    return best if best != float("inf") else 1e9


def heuristic_score(node: Node) -> float:
    nums = node.numbers
    base = -one_step_closeness(nums)
    depth_penalty = 0.05 * node.depth
    exact_bonus = 2.0 if any(is_24(x) for x in nums) else 0.0
    return base - depth_penalty + exact_bonus

We implement the mathematical logic of the 24-game domain. We define safe operator execution, expression construction, goal checking, and a heuristic scoring function that estimates how close a state is to the goal of 24. We design the heuristic to guide the search intelligently while penalizing deeper branches.
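For intuition, here is a standalone worked example (re-declaring just the helpers above). For the start state [4, 1, 8, 7], the closest any single pairwise operation gets to 24 is 4 * 7 = 28, so the one-step closeness is 4, and the root's heuristic score is -4 (no depth penalty, no exact bonus):

```python
from typing import List, Optional

OPS = ["+", "-", "*", "/"]

def safe_apply(a: float, b: float, op: str) -> Optional[float]:
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    if op == "/": return a / b if abs(b) >= 1e-12 else None
    return None

def one_step_closeness(nums: List[float]) -> float:
    # Distance to 24 of the best single pairwise operation.
    if len(nums) == 1:
        return abs(nums[0] - 24.0)
    best = float("inf")
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i == j:
                continue
            for op in OPS:
                r = safe_apply(nums[i], nums[j], op)
                if r is not None:
                    best = min(best, abs(r - 24.0))
    return best

print(one_step_closeness([4.0, 1.0, 8.0, 7.0]))  # 4.0 (best move: 4 * 7 = 28)
```

A fully reduced state of [24.0] has closeness 0, so solved leaves score highest even after the exact bonus.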

PROPOSER_PROMPT = """You are helping solve the 24 game.
We have current items; each item has an expression and its numeric value.
Pick TWO items and combine them with one operation from + - * / to create a new item.
Return between {k} and {k2} suggestions as lines using EXACT format:


i,j,op


Where i and j are 0-based indices into the list. Use i != j. Prefer moves that help reach 24.


Current items:
{items}
"""


def llm_generate_suggestions(items: str, k_min: int, k_max: int, max_new_tokens: int = 160) -> str:
    prompt = PROPOSER_PROMPT.format(k=k_min, k2=k_max, items=items)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.8,
            top_p=0.92,
            num_return_sequences=1,
        )
    txt = tokenizer.decode(out[0], skip_special_tokens=True)
    return txt.strip()


def parse_moves(text: str, n_items: int) -> List[Tuple[int, int, str]]:
    moves = []
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"^\s*(\d+)\s*,\s*(\d+)\s*,\s*([+\-*/])\s*$", line)
        if not m:
            continue
        i, j, op = int(m.group(1)), int(m.group(2)), m.group(3)
        if 0 <= i < n_items and 0 <= j < n_items and i != j:
            moves.append((i, j, op))
    seen = set()
    uniq = []
    for mv in moves:
        if mv not in seen:
            uniq.append(mv)
            seen.add(mv)
    return uniq


def fallback_moves(nums: List[float], limit: int = 24) -> List[Tuple[int, int, str]]:
    scored = []
    n = len(nums)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for op in OPS:
                r = safe_apply(nums[i], nums[j], op)
                if r is None:
                    continue
                scored.append((abs(r - 24.0), i, j, op))
    scored.sort(key=lambda x: x[0])
    out = [(i, j, op) for _, i, j, op in scored[:limit]]
    seen, uniq = set(), []
    for mv in out:
        if mv not in seen:
            uniq.append(mv)
            seen.add(mv)
    return uniq

We build the LLM proposer that generates multiple reasoning branches. We format the prompt carefully so the model returns structured combine operations, and we parse those outputs into executable moves. We also implement a deterministic fallback strategy to ensure the search stays robust even when the model output is noisy.
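To see the parser's robustness in isolation, here is a standalone re-declaration of parse_moves run on deliberately noisy model output. Malformed lines, out-of-range indices, i == j pairs, and duplicates are all dropped silently:

```python
import re
from typing import List, Tuple

def parse_moves(text: str, n_items: int) -> List[Tuple[int, int, str]]:
    moves = []
    for line in text.splitlines():
        m = re.match(r"^\s*(\d+)\s*,\s*(\d+)\s*,\s*([+\-*/])\s*$", line.strip())
        if not m:
            continue  # skip chatter around the structured lines
        i, j, op = int(m.group(1)), int(m.group(2)), m.group(3)
        if 0 <= i < n_items and 0 <= j < n_items and i != j:
            moves.append((i, j, op))
    seen, uniq = set(), []
    for mv in moves:  # de-duplicate, preserving order
        if mv not in seen:
            uniq.append(mv)
            seen.add(mv)
    return uniq

noisy = "Sure! Here are moves:\n0,3,*\n2,2,+\n0,3,*\n1, 2, -\n5,0,+"
print(parse_moves(noisy, n_items=4))  # [(0, 3, '*'), (1, 2, '-')]
```

Only when nothing at all survives parsing does the search fall back to the deterministic move enumerator.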

def apply_move(node: Node, i: int, j: int, op: str) -> Optional[Node]:
    nums = node.numbers[:]
    exprs = node.exprs[:]


    a, b = nums[i], nums[j]
    r = safe_apply(a, b, op)
    if r is None:
        return None


    ea, eb = exprs[i], exprs[j]
    new_expr = combine_expr(ea, eb, op)


    for idx in sorted([i, j], reverse=True):
        nums.pop(idx)
        exprs.pop(idx)


    nums.append(r)
    exprs.append(new_expr)


    child = Node(
        depth=node.depth + 1,
        numbers=nums,
        exprs=exprs,
        parent=node,
        thought=f"Combine items {i} and {j} with '{op}' -> {new_expr} = {r:g}",
    )
    child.is_goal = (len(nums) == 1 and is_24(nums[0]))
    child.score = heuristic_score(child)
    return child


def expand(node: Node, branch_factor: int, proposer_kmin: int = 8, proposer_kmax: int = 14) -> List[Node]:
    items_str = "\n".join([f"{idx}: {node.exprs[idx]} = {node.numbers[idx]:g}" for idx in range(len(node.numbers))])


    raw = llm_generate_suggestions(items_str, proposer_kmin, proposer_kmax)
    moves = parse_moves(raw, len(node.numbers))


    if not moves:
        moves = fallback_moves(node.numbers, limit=30)


    moves = moves[: max(branch_factor * 2, branch_factor)]


    children = []
    for (i, j, op) in moves:
        ch = apply_move(node, i, j, op)
        if ch is not None:
            children.append(ch)


    children.sort(key=lambda x: x.score, reverse=True)
    return children[:branch_factor]

We implement the branch expansion mechanism of the Tree-of-Thoughts algorithm. We apply proposed moves to create new child nodes and compute their heuristic scores. We then locally prune weaker branches, keeping only the strongest candidates for further exploration.
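The state transition itself is easy to see in isolation. The sketch below uses a hypothetical helper, apply_move_simple, that mirrors apply_move above minus the Node bookkeeping: combining items 0 and 3 of [4, 1, 8, 7] with '*' collapses the state to three items:

```python
from typing import List, Optional, Tuple

def apply_move_simple(nums: List[float], exprs: List[str], i: int, j: int, op: str
                      ) -> Optional[Tuple[List[float], List[str]]]:
    # Same transition as apply_move above, without Node bookkeeping.
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b if abs(b) >= 1e-12 else None}
    r = ops[op](nums[i], nums[j])
    if r is None:
        return None  # reject division by (near-)zero
    new_expr = f"({exprs[i]} {op} {exprs[j]})"
    nums, exprs = nums[:], exprs[:]
    for idx in sorted([i, j], reverse=True):  # remove consumed items
        nums.pop(idx)
        exprs.pop(idx)
    return nums + [r], exprs + [new_expr]

nums, exprs = apply_move_simple([4.0, 1.0, 8.0, 7.0], ["4", "1", "8", "7"], 0, 3, "*")
print(nums)   # [1.0, 8.0, 28.0]
print(exprs)  # ['1', '8', '(4 * 7)']
```

Three such transitions reduce four numbers to one expression, which is why the solver's search depth is capped at 3.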

def reconstruct_solution(goal: Node) -> List[str]:
    path = []
    cur = goal
    while cur is not None:
        if cur.thought:
            path.append(cur.thought)
        cur = cur.parent
    return list(reversed(path))


def tot_solve_24(
    start_nums: List[int],
    beam_width: int = 10,
    branch_factor: int = 8,
    max_depth: int = 3,
    prune_threshold: float = -10.0,
    verbose: bool = True
) -> Dict[str, Any]:
    root = Node(
        depth=0,
        numbers=[float(x) for x in start_nums],
        exprs=[str(x) for x in start_nums],
    )
    root.score = heuristic_score(root)


    beam = [root]
    best_seen = root


    if verbose:
        print("\n=== ToT Search Start ===")
        print("Start:", pretty_state(root.numbers, root.exprs))
        print("Root score:", root.score)


    for d in range(max_depth):
        candidates: List[Node] = []


        if verbose:
            print(f"\n--- Depth {d} -> {d+1} expansion ---")
            print("Beam states:")
            for bidx, b in enumerate(beam[: min(len(beam), 6)]):
                print(f"  [{bidx}] score={b.score:.3f} | {pretty_state(b.numbers, b.exprs)}")


        for b in beam:
            children = expand(b, branch_factor=branch_factor)
            candidates.extend(children)


        if not candidates:
            break


        candidates = [c for c in candidates if c.score >= prune_threshold]


        goals = [c for c in candidates if c.is_goal]
        if goals:
            goals.sort(key=lambda x: x.score, reverse=True)
            sol = goals[0]
            steps = reconstruct_solution(sol)
            return {
                "solved": True,
                "start": start_nums,
                "expression": sol.exprs[0],
                "value": sol.numbers[0],
                "steps": steps,
                "final_score": sol.score
            }


        candidates.sort(key=lambda x: x.score, reverse=True)
        beam = candidates[:beam_width]


        if beam and beam[0].score > best_seen.score:
            best_seen = beam[0]


        if verbose:
            print("Top candidates after pruning/beam:")
            for cidx, c in enumerate(beam[: min(len(beam), 6)]):
                print(f"  [{cidx}] score={c.score:.3f} | {pretty_state(c.numbers, c.exprs)}")


    best_expr = best_seen.exprs[0] if len(best_seen.exprs) == 1 else " ; ".join(best_seen.exprs)
    best_val = best_seen.numbers[0] if len(best_seen.numbers) == 1 else None
    return {
        "solved": False,
        "start": start_nums,
        "best_state": pretty_state(best_seen.numbers, best_seen.exprs),
        "best_expression": best_expr,
        "best_value": best_val,
        "final_score": best_seen.score,
        "note": "Not solved within depth/beam limits; increase beam_width/branch_factor or adjust pruning."
    }


tests = [
    [4, 1, 8, 7],
    [3, 3, 8, 8],
    [6, 6, 6, 6],
    [9, 9, 4, 4],
]


for nums in tests:
    result = tot_solve_24(
        nums,
        beam_width=12,
        branch_factor=10,
        max_depth=3,
        prune_threshold=-12.0,
        verbose=True
    )
    print("\n=== RESULT ===")
    for k, v in result.items():
        if k == "steps":
            print("steps:")
            for s in v:
                print("  -", s)
        else:
            print(f"{k}: {v}")
    print("\n" + "="*80 + "\n")


print("""
To adapt this ToT agent beyond the 24 game:
1) Define a STATE representation (like numbers/exprs here).
2) Define a PROPOSER that generates candidate next steps (LLM tool or rule-based).
3) Define a HEURISTIC / SCORER:
   - for checkable tasks, use objective scoring
   - for open-ended tasks, use an LLM-critic scoring rubric
4) Run the same ToT loop:
   expand -> score -> prune -> keep top beam -> repeat until goal or depth limit.
""")

We implement the full Tree-of-Thoughts search loop using beam search and depth limits. We expand, score, prune, and select the top branches at each depth until we either reach a solution or exhaust the search budget. Finally, we reconstruct the reasoning path and show how the agent solves several 24-game instances step by step.

In conclusion, we built a complete multi-branch reasoning agent that demonstrates how Tree-of-Thoughts transforms LLM reasoning from a single path into a structured search process. We implemented branch generation, heuristic scoring, pruning, beam selection, and depth control in a modular architecture that can easily be adapted to other reasoning problems. Through this tutorial, we saw how combining language models with search algorithms significantly improves structured decision-making. We now have a reusable ToT framework that we can extend to mathematical reasoning, planning tasks, symbolic search, and even LLM-critic-based evaluation systems.



