Reorganize task categories from opaque a/b to descriptive names

Replace category_a/category_b directories with algorithm, pipeline,
environment, filesystem, and process. Add separate mode field (solve/convert)
to decouple orchestration from capability grouping. Add per-category
summary and questionnaire breakdowns to both terminal report and HTML export.
Author: Cormac Shannon
Date: 2026-03-29 20:59:01 +01:00
Parent: be8d657b24
Commit: 20e62f60f6
18 changed files with 487 additions and 167 deletions


@@ -0,0 +1,29 @@
name = "csv_transform"
category = "pipeline"
mode = "convert"
description = """
Read CSV data from stdin. The first line is a header.
Each subsequent line has fields: name,age,city
Print each record as "name is age years old and lives in city", one per line.
Skip the header in the output.
"""
bash_source = """
#!/bin/bash
read -r header # skip header
while IFS=',' read -r name age city || [[ -n "$name" ]]; do
echo "$name is $age years old and lives in $city"
done
"""
[[test_cases]]
stdin = """name,age,city
Alice,30,Paris
Bob,25,London"""
expected_stdout = """Alice is 30 years old and lives in Paris
Bob is 25 years old and lives in London"""
[[test_cases]]
stdin = """name,age,city
Charlie,40,Tokyo"""
expected_stdout = "Charlie is 40 years old and lives in Tokyo"
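The reference script above leans on two bash idioms worth spelling out: scoping `IFS=','` to a single `read` so splitting happens only there, and the `|| [[ -n "$name" ]]` clause so a final line without a trailing newline is still processed. A standalone sketch of the same pattern (the `parse_csv` name is illustrative, not from the diff):

```shell
#!/bin/bash
# IFS=',' applies only to this read call; -r preserves backslashes;
# the || clause handles a last line that lacks a trailing newline.
parse_csv() {
  while IFS=',' read -r name age city || [[ -n "$name" ]]; do
    echo "$name is $age years old and lives in $city"
  done
}

# Note: no trailing newline, yet the record is still emitted.
printf 'Alice,30,Paris' | parse_csv   # -> Alice is 30 years old and lives in Paris
```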


@@ -0,0 +1,34 @@
name = "log_parser"
category = "pipeline"
mode = "convert"
description = """
Read log lines from stdin. Each line has the format: "LEVEL: message"
where LEVEL is one of ERROR, WARN, INFO.
Count occurrences of each level and print a summary sorted by level name.
Format: "LEVEL: count"
"""
bash_source = """
#!/bin/bash
while IFS= read -r line || [[ -n "$line" ]]; do
echo "${line%%:*}"
done | sort | uniq -c | while read -r count level; do
echo "$level: $count"
done
"""
[[test_cases]]
stdin = """ERROR: disk full
INFO: started
WARN: low memory
ERROR: timeout
INFO: completed"""
expected_stdout = """ERROR: 2
INFO: 2
WARN: 1"""
[[test_cases]]
stdin = """INFO: boot
INFO: ready
INFO: shutdown"""
expected_stdout = "INFO: 3"
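The level extraction in the script above uses prefix/suffix parameter expansion rather than an external tool. A quick demonstration of the two forms involved (the sample `line` value is illustrative):

```shell
#!/bin/bash
# %%pattern removes the longest matching suffix;
# #pattern removes the shortest matching prefix.
line='ERROR: disk full'
echo "${line%%:*}"   # longest ':*' suffix removed -> ERROR
echo "${line#*: }"   # shortest '*: ' prefix removed -> disk full
```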


@@ -0,0 +1,38 @@
name = "pipeline_transform"
category = "pipeline"
mode = "solve"
description = """
Read lines from stdin. Build a pipeline that:
1. Filters to only lines containing the word "error" (case-insensitive)
2. Extracts the portion after the first colon (trimming leading whitespace)
3. Sorts the results alphabetically
4. Removes duplicate lines
Print the final result to stdout, one entry per line.
"""
[[test_cases]]
stdin = """INFO: server started
ERROR: disk full
WARN: low memory
error: connection refused
ERROR: disk full
INFO: request handled
Error: timeout reached"""
expected_stdout = """connection refused
disk full
timeout reached"""
[[test_cases]]
stdin = """ERROR: alpha
ERROR: charlie
ERROR: bravo
ERROR: alpha"""
expected_stdout = """alpha
bravo
charlie"""
[[test_cases]]
stdin = """INFO: all good
WARN: nothing here"""
expected_stdout = ""
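Because this task is `mode = "solve"`, the file deliberately omits a `bash_source`. One minimal sketch that satisfies the listed test cases, assuming standard `grep`/`sed`/`sort` (the `extract_errors` name is hypothetical, not the repo's reference answer):

```shell
#!/bin/bash
# Filter, extract, sort, and de-duplicate in a single pipeline.
extract_errors() {
  grep -iw 'error' |                 # keep lines containing the word "error", any case
    sed 's/^[^:]*:[[:space:]]*//' |  # drop through the first colon plus leading whitespace
    sort -u                          # alphabetical order, duplicates removed
}

printf 'ERROR: disk full\nINFO: ok\nerror: disk full\n' | extract_errors   # -> disk full
```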


@@ -0,0 +1,37 @@
name = "pipeline_word_freq"
category = "pipeline"
mode = "convert"
description = """
Read text from stdin. Count the frequency of each word (case-insensitive; a word is a maximal run of alphabetic characters).
Print the top 5 most frequent words in descending order of frequency, in the format:
"count word"
If two words have the same count, sort them alphabetically.
If there are fewer than 5 unique words, print all of them.
"""
bash_source = """
#!/bin/bash
tr '[:upper:]' '[:lower:]' |       # normalize to lowercase
  tr -cs '[:alpha:]' '\n' |        # one word per line, non-alphabetic runs become breaks
  grep -v '^$' |                   # drop empty lines
  sort | uniq -c |                 # count each distinct word
  sort -k1,1rn -k2,2 |             # count descending, then word ascending
  head -5 |
  while read -r count word || [[ -n "$word" ]]; do
    echo "$count $word"            # strip uniq -c's leading padding
  done
"""
[[test_cases]]
stdin = """The quick brown fox jumps over the lazy dog.
The dog barked at the fox. The fox ran away."""
expected_stdout = """5 the
3 fox
2 dog
1 at
1 away"""
[[test_cases]]
stdin = "hello hello world"
expected_stdout = """2 hello
1 world"""
[[test_cases]]
stdin = "One one ONE two TWO two Three three three three"
expected_stdout = """4 three
3 one
3 two"""
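The tie-breaking in the script above comes entirely from the two-key `sort` invocation: `-k1,1rn` orders field 1 numerically in reverse, and `-k2,2` breaks ties alphabetically on field 2. A small isolated check (sample data only):

```shell
#!/bin/bash
# Two sort keys: count descending (numeric, reversed), word ascending.
printf '3 one\n4 three\n3 two\n' | sort -k1,1rn -k2,2
# -> 4 three
#    3 one
#    3 two
```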