
Local, API-Free ML: Build a Robust End-to-End Pipeline with MLE-Agent and Ollama

A step-by-step guide to creating a fully local ML workflow with MLE-Agent and Ollama in Colab, with automated code generation, sanitization, and a safe fallback training script.

Helper to run shell commands

Start by defining a small helper that runs shell commands from Python, prints output and raises on failure so the workflow is easy to monitor:

import os, re, time, textwrap, subprocess, sys
from pathlib import Path
 
 
def sh(cmd, check=True, env=None, cwd=None):
    """Run a shell command, echo its output, and raise on failure."""
    print(f"$ {cmd}")
    p = subprocess.run(cmd, shell=True, env={**os.environ, **(env or {})} if env else None,
                       cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    print(p.stdout)
    if check and p.returncode != 0:
        raise RuntimeError(p.stdout)
    return p.stdout

Workspace, dependencies and Ollama

Set up a reproducible Colab workspace, install the pinned Python packages, install Ollama, and pull a local model. Running Ollama locally means the workflow needs no external API keys:

WORK=Path("/content/mle_colab_demo"); WORK.mkdir(parents=True, exist_ok=True)
PROJ=WORK/"proj"; PROJ.mkdir(exist_ok=True)
DATA=WORK/"data.csv"; MODEL=WORK/"model.joblib"; PREDS=WORK/"preds.csv"
SAFE=WORK/"train_safe.py"; RAW=WORK/"agent_train_raw.py"; FINAL=WORK/"train.py"
MODEL_NAME=os.environ.get("OLLAMA_MODEL","llama3.2:1b")
 
 
sh("pip -q install --upgrade pip")
sh("pip -q install mle-agent==0.4.* scikit-learn pandas numpy joblib")
 
 
sh("curl -fsSL https://ollama.com/install.sh | sh")
sv = subprocess.Popen("ollama serve", shell=True)
time.sleep(4); sh(f"ollama pull {MODEL_NAME}")
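
Before generating data, it can be worth confirming that the Ollama server actually came up. A minimal sanity check, not part of the original flow, that queries the local API (default host http://127.0.0.1:11434) and falls back to the CLI listing:

import urllib.request, json

# Optional sanity check: ask the local Ollama server which models it has pulled.
try:
    with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as r:
        models = [m.get("name") for m in json.load(r).get("models", [])]
    print("Ollama is up; local models:", models)
except Exception as e:
    print("Ollama not reachable yet, checking via CLI:", e)
    sh("ollama list", check=False)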

Generate a tiny synthetic dataset and prompt the agent

Create a small labeled dataset and set environment variables so you can drive MLE-Agent through Ollama locally. Craft a strict prompt that requests a single fenced Python code block implementing train.py:

import numpy as np, pandas as pd
np.random.seed(0)
n=500; X=np.random.rand(n,4); y=(X@np.array([0.4,-0.2,0.1,0.5])+0.15*np.random.randn(n)>0.55).astype(int)
pd.DataFrame(np.c_[X,y], columns=["f1","f2","f3","f4","target"]).to_csv(DATA, index=False)
 
 
env = {"OPENAI_API_KEY":"", "ANTHROPIC_API_KEY":"", "GEMINI_API_KEY":"",
      "OLLAMA_HOST":"http://127.0.0.1:11434", "MLE_LLM_ENGINE":"ollama","MLE_MODEL":MODEL_NAME}
prompt=f"""Return ONE fenced python code block only.
Write train.py that reads {DATA}; 80/20 split (random_state=42, stratify);
Pipeline: SimpleImputer + StandardScaler + LogisticRegression(class_weight='balanced', max_iter=1000, random_state=42);
Print ROC-AUC & F1; print sorted coefficient magnitudes; save model to {MODEL} and preds to {PREDS};
Use only sklearn, pandas, numpy, joblib; no extra text."""
def extract(txt: str) -> str | None:
    # Strip ANSI escape sequences, then pull out the first fenced code block.
    txt = re.sub(r"\x1B\[[0-?]*[ -/]*[@-~]", "", txt)
    m = re.search(r"```(?:python)?\s*([\s\S]*?)```", txt, re.I)
    if m: return m.group(1).strip()
    if txt.strip().lower().startswith("python"): return txt.strip()[6:].strip()
    m = re.search(r"(?:^|\n)(from\s+[^\n]+|import\s+[^\n]+)([\s\S]*)", txt)
    return (m.group(1) + m.group(2)).strip() if m else None
 
 
out = sh(f'printf %s "{prompt}" | mle chat', check=False, cwd=str(PROJ), env=env)
code = extract(out)
if not code:
    # Fall back to prompting the model directly if the agent output had no usable code block.
    out = sh(f'printf %s "{prompt}" | ollama run {MODEL_NAME}', check=False, env=env)
    code = extract(out)
RAW.write_text(code or "", encoding="utf-8")

This saves the raw agent output for inspection and further sanitization.
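
Optionally, print the start of the saved file to see what the agent actually produced; a small convenience step, not required by the pipeline:

# Peek at the raw, un-sanitized agent output saved above.
raw_txt = RAW.read_text(encoding="utf-8")
print(raw_txt[:600] if raw_txt else "(agent returned no code)")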

Sanitize the agent output and prepare a safe fallback

LLM-generated code often has small import mistakes or stray prefixes. Define a sanitizer that tries to fix common patterns and also prepare a deterministic fallback training script that will always run:

def sanitize(src: str) -> str:
    if not src: return ""
    s = src.replace("\r", "")
    s = re.sub(r"^python\b", "", s.strip(), flags=re.I).strip()
    # Rewrite imports that point at the wrong sklearn submodule.
    fixes = {
        r"from\s+sklearn\.pipeline\s+import\s+SimpleImputer": "from sklearn.impute import SimpleImputer",
        r"from\s+sklearn\.preprocessing\s+import\s+SimpleImputer": "from sklearn.impute import SimpleImputer",
        r"from\s+sklearn\.pipeline\s+import\s+StandardScaler": "from sklearn.preprocessing import StandardScaler",
        r"from\s+sklearn\.preprocessing\s+import\s+ColumnTransformer": "from sklearn.compose import ColumnTransformer",
        r"from\s+sklearn\.pipeline\s+import\s+ColumnTransformer": "from sklearn.compose import ColumnTransformer",
    }
    for pat, rep in fixes.items(): s = re.sub(pat, rep, s)
    # Prepend any imports the generated code uses but forgot to declare.
    if "SimpleImputer" in s and "from sklearn.impute import SimpleImputer" not in s:
        s = "from sklearn.impute import SimpleImputer\n" + s
    if "StandardScaler" in s and "from sklearn.preprocessing import StandardScaler" not in s:
        s = "from sklearn.preprocessing import StandardScaler\n" + s
    if "ColumnTransformer" in s and "from sklearn.compose import ColumnTransformer" not in s:
        s = "from sklearn.compose import ColumnTransformer\n" + s
    if "train_test_split" in s and "from sklearn.model_selection import train_test_split" not in s:
        s = "from sklearn.model_selection import train_test_split\n" + s
    if "joblib" in s and "import joblib" not in s: s = "import joblib\n" + s
    return s
 
 
san = sanitize(code)
 
 
safe = textwrap.dedent(f"""
import pandas as pd, numpy as np, joblib
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.compose import ColumnTransformer
 
 
DATA=Path("{DATA}"); MODEL=Path("{MODEL}"); PREDS=Path("{PREDS}")
df=pd.read_csv(DATA); X=df.drop(columns=["target"]); y=df["target"].astype(int)
num=X.columns.tolist()
pre=ColumnTransformer([("num",Pipeline([("imp",SimpleImputer()),("sc",StandardScaler())]),num)])
clf=LogisticRegression(class_weight='balanced', max_iter=1000, random_state=42)
pipe=Pipeline([("pre",pre),("clf",clf)])
Xtr,Xte,ytr,yte=train_test_split(X,y,test_size=0.2,random_state=42,stratify=y)
pipe.fit(Xtr,ytr)
proba=pipe.predict_proba(Xte)[:,1]; pred=(proba>=0.5).astype(int)
print("ROC-AUC:",round(roc_auc_score(yte,proba),4)); print("F1:",round(f1_score(yte,pred),4))
coef=pd.Series(pipe.named_steps["clf"].coef_.ravel(), index=num).abs().sort_values(ascending=False)
print("Top coefficients by |magnitude|:\\n", coef.to_string())
joblib.dump(pipe,MODEL)
pd.DataFrame({{"y_true":yte.reset_index(drop=True),"y_prob":proba,"y_pred":pred}}).to_csv(PREDS,index=False)
print("Saved:",MODEL,PREDS)
""").strip()

The safe script is a fully deterministic training routine that uses scikit-learn's pipeline, evaluates ROC-AUC and F1, prints coefficient magnitudes, and saves model and predictions.
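
Before picking one of the two scripts, a quick syntax check can catch anything the sanitizer missed. A minimal sketch using the standard-library ast module (an extra safeguard, not part of the original flow):

import ast

def parses_ok(label: str, src: str) -> bool:
    # Report whether a candidate training script is syntactically valid Python.
    try:
        ast.parse(src)
        print(label, "parses cleanly")
        return True
    except SyntaxError as e:
        print(label, "has a syntax error:", e)
        return False

parses_ok("sanitized agent code", san)
parses_ok("safe fallback", safe)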

Choose between sanitized agent code and the safe fallback, then run

Decide whether the sanitized agent output is usable; if not, fall back to the safe script. Save both versions for auditing, then run the chosen train.py and inspect the artifacts:

chosen = san if ("import " in san and "sklearn" in san and "read_csv" in san) else safe
Path(SAFE).write_text(safe, encoding="utf-8")
Path(FINAL).write_text(chosen, encoding="utf-8")
print("n=== Using train.py (first 800 chars) ===n", chosen[:800], "n...")
 
 
sh(f"python {FINAL}")
print("nArtifacts:", [str(p) for p in WORK.glob('*')])
print(" Done — outputs in", WORK)

This whole flow demonstrates how to combine local LLM-driven code generation with deterministic safety checks and a guaranteed fallback so training completes reliably without external API keys.
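
As a quick follow-up, the saved artifacts can be reloaded for inference; a minimal sketch, assuming new rows share the same four feature columns as the training data:

import joblib
import numpy as np, pandas as pd

# Reload the fitted pipeline and score a few synthetic rows with the same schema.
pipe = joblib.load(MODEL)
new_rows = pd.DataFrame(np.random.rand(3, 4), columns=["f1", "f2", "f3", "f4"])
print("Probabilities for new rows:", pipe.predict_proba(new_rows)[:, 1].round(4))

# The held-out predictions written by train.py can be inspected the same way.
print(pd.read_csv(PREDS).head())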
