I’ve managed to write another script that seems to work:
import json
import re


def load_legal_cards(json_file):
    """
    Load legal cards from a JSON file with structure:
    { "sets": [], "cards": [], "banned": [] }
    """
    with open(json_file, 'r', encoding='utf-8') as f:
        data = json.load(f)
    legal_cards = [card.lower() for card in data.get('cards', [])]
    banned_cards = [card.lower() for card in data.get('banned', [])]
    return legal_cards, banned_cards


def clean_line(line):
    """
    Remove quantities, set info, markers, and whitespace.
    Skip lines that are section headers like 'Deck' or 'Sideboard'.
    """
    line = re.sub(r'^\d+\s*x?\s*', '', line)  # "2 " or "2x "
    line = re.sub(r'\(.*?\)', '', line)       # "(SET)"
    line = re.sub(r'\*\w+\*', '', line)       # "*F*"
    line = line.strip()
    if re.match(r'^(deck|sideboard)\s*:?\s*$', line, re.IGNORECASE):
        return None
    return line if line else None


def validate_deck(deck_file, legal_cards, banned_cards):
    """
    Return a list of illegal cards.
    """
    illegal_cards = []
    with open(deck_file, 'r', encoding='utf-8') as f:
        for line in f:
            card_name = clean_line(line)
            if not card_name or card_name.startswith("#"):
                continue  # skip empty or comment lines
            card_lower = card_name.lower()
            if card_lower in banned_cards or card_lower not in legal_cards:
                illegal_cards.append(card_name)
    return illegal_cards


def main():
    legal_cards_file = 'legal_cards.json'  # JSON with "cards" and optional "banned"
    decklist_file = 'decklist.txt'         # your decklist input
    legal_cards, banned_cards = load_legal_cards(legal_cards_file)
    illegal_cards = validate_deck(decklist_file, legal_cards, banned_cards)
    if illegal_cards:
        print("Illegal cards:")
        for card in illegal_cards:
            print(card)


if __name__ == "__main__":
    main()
I exported the Standard Penny collection from Moxfield to JSON using a Python script:
import csv
import json

input_csv = 'moxfield_haves_2025-10-21-1123Z.csv'
output_json = 'standard_penny.json'

sets = set()
cards = []

with open(input_csv, newline='', encoding='utf-8') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        name = row.get('Name')
        edition = row.get('Edition')
        if name:
            cards.append(name)
        if edition:
            sets.add(edition.upper())

output_data = {
    "sets": sorted(sets),
    "cards": cards
}

with open(output_json, 'w', encoding='utf-8') as jsonfile:
    json.dump(output_data, jsonfile, indent=2)

print(f"JSON saved to {output_json}")
I saved the JSON file as validator/formats/standardpenny.json
and added it to the validator’s config:
{ "name": "Standard Penny", "key": "standardpenny", "datafile":"formats/standardpenny.json" },
Then I tried to validate this deck exported as Plain Text from Moxfield and got the error.
When I try to validate a deck, I only see the message “Loading data, please wait…” and nothing happens. So I’m not sure if it’s a problem with my JSON export, the file path, or the validator itself.
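One way to narrow down whether the export or the validator is at fault is to sanity-check the JSON file directly before the app ever loads it. This is a hypothetical helper of my own (the expected `sets`/`cards` keys are the ones the export script above writes; adjust if your validator expects a different shape):

```python
import json

def check_format_data(data):
    """Return a list of problems found in the parsed JSON (empty = looks OK)."""
    problems = []
    for key in ("sets", "cards"):
        if key not in data:
            problems.append(f"missing top-level key: {key!r}")
        elif not isinstance(data[key], list):
            problems.append(f"{key!r} is not a list")
    if not data.get("cards"):
        problems.append("'cards' is empty")
    return problems

def check_format_file(path):
    # json.load raises an error here if the file is not valid JSON at all
    with open(path, encoding="utf-8") as f:
        return check_format_data(json.load(f))
```

If this reports no problems but the validator still hangs on "Loading data", the issue is more likely the `datafile` path in the config or the validator itself.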
Seems simple enough, so I’m going to try to make one myself. Here’s the idea I have so far:
I’m not aware of a single tool, but you could ensure the deck is standard legal in any normal deck building tool, then additionally check it against the Penny Dreadful deck checker - if it passes both, it should be legal in your format (assuming I understand what you’re doing correctly.)
Edit: Nevermind, I see you’re limiting it to $1, not $0.01, despite borrowing the name. Penny Dreadful checker won’t work.
Yeah, Penny Dreadful uses tix<=0.02, and this uses both tix<=0.1 and usd<=1.
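If both caps have to hold, the check against a Scryfall card object’s `prices` field might look like the sketch below. The field names (`prices`, `usd`, `tix`) match Scryfall’s API; treating unpriced cards as failing the check is my own assumption:

```python
def within_budget(card, usd_max=1.0, tix_max=0.1):
    """Sketch: does this Scryfall card object fit under both price caps?"""
    prices = card.get("prices", {})
    usd = prices.get("usd")  # Scryfall returns prices as strings or None
    tix = prices.get("tix")
    if usd is None or tix is None:
        # Assumption: a card with no listed price fails the check;
        # flip this if you would rather allow unpriced cards.
        return False
    return float(usd) <= usd_max and float(tix) <= tix_max
```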
✅ This will create a fully Moxfield-compatible CSV with all cards from a Scryfall search.
import requests
import csv
import time

QUERY = "f:standard f:penny usd<=1"
BASE_URL = "https://api.scryfall.com/cards/search"
PARAMS = {
    "q": QUERY,
    "unique": "cards",
    "format": "json"
}
OUTPUT_FILE = "moxfield_import.csv"
FIELDNAMES = [
    "Count",
    "Tradelist Count",
    "Name",
    "Edition",
    "Condition",
    "Language",
    "Foil",
    "Tags",
    "Last Modified",
    "Collector Number",
    "Alter",
    "Proxy",
    "Purchase Price"
]


def fetch_all_cards():
    url = BASE_URL
    params = PARAMS.copy()
    while True:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        data = resp.json()
        for card in data.get("data", []):
            yield card
        if not data.get("has_more"):
            break
        url = data["next_page"]  # next_page already carries the query string
        params = None
        time.sleep(0.2)  # be polite to the API


def write_cards_to_csv(filename):
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        writer.writeheader()
        for card in fetch_all_cards():
            writer.writerow({
                "Count": 1,
                "Tradelist Count": "",
                "Name": card.get("name"),
                "Edition": card.get("set"),
                "Condition": "",
                "Language": card.get("lang"),
                "Foil": "Yes" if card.get("foil") else "No",
                "Tags": "",
                "Last Modified": "",
                "Collector Number": card.get("collector_number"),
                "Alter": "",
                "Proxy": "",
                "Purchase Price": ""
            })


if __name__ == "__main__":
    write_cards_to_csv(OUTPUT_FILE)
    print(f"Saved all cards to {OUTPUT_FILE}")
My first try was using this script:
Query Scryfall + dump card names out for easy import into Moxfield
❯ python scryfall_search.py -q "f:standard f:penny usd<=1" --output-as-file "$HOME/desktop/out.csv"
Running Scryfall search on f:standard f:penny usd<=1 legal:commander
Found 1,197 total matches!
But when I tried importing the output CSV in Moxfield, I got a bunch of “No card name found on line x” errors.
Is there a deckbuilder that allows using just that list to build decks? How would I import it?
#!/bin/bash
# Note: the <= in the query must be percent-encoded (%3C%3D) to be a valid URL
url="https://api.scryfall.com/cards/search?q=f%3Astandard+f%3Apenny+usd%3C%3D1"
data=()

while [ -n "$url" ]; do
    response=$(curl -s "$url")
    data_chunk=$(echo "$response" | jq -c '.data[]')
    while read -r card; do
        data+=("$card")
    done <<< "$data_chunk"
    has_more=$(echo "$response" | jq -r '.has_more')
    if [ "$has_more" = "true" ]; then
        url=$(echo "$response" | jq -r '.next_page')
    else
        url=""
    fi
done

for card_json in "${data[@]}"; do
    echo "$card_json" | jq -r '.name'
done
The list needs to be static. How can you create decks for a format that is constantly changing? What I need is a way to share a consistent list of legal cards so that everyone can search within the same list, rather than each person having a different version.
Are you replying to the right post?
Forge not only has all the decks with their original printings but also has the capability to play against the AI.
You can go here: https://mtg.wtf/deck. Or, if you want it in JSON format, the same data is also exported to mtgjson: https://mtgjson.com/. The same data is available in a few other export formats as well.
Source data for it is in https://github.com/taw/magic-preconstructed-decks with source URLs for every deck (some of these expired by now and you’d need to go to the Web Archive - WotC redesigns its website every few years, killing old URLs).
Inferring exact set and collector number based on all available information is done algorithmically.
Everything should have correct names, quantities, and set codes.
A few cards won’t have correct collector numbers. The list of cards which are generally expected to not have exact collector number: “Plains”, “Island”, “Swamp”, “Mountain”, “Forest”, “Wastes”, “Azorius Guildgate”, “Boros Guildgate”, “Dimir Guildgate”, “Golgari Guildgate”, “Gruul Guildgate”, “Izzet Guildgate”, “Orzhov Guildgate”, “Rakdos Guildgate”, “Selesnya Guildgate”, “Simic Guildgate”
For everything else, the algorithm is exact as far as we know. Anything the algorithm can’t detect automatically it flags, and we resolve it manually.
I noticed that the default deck download format on the website doesn’t include set code and collector number information.
If you’re fine with JSON, you can use mtgjson, or this file: https://raw.githubusercontent.com/taw/magic-preconstructed-decks-data/master/decks_v2.json (which is exported to mtgjson).
In case it matters, collector numbers are Gatherer-style not Scryfall-style (so DFCs are 123a / 123b, not 123 etc.). This only really affects cards with multiple parts.
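If you need Scryfall-style numbers instead, stripping the face suffix is straightforward. A sketch, assuming the only suffixes are the a/b face letters mentioned above:

```python
import re

def gatherer_to_scryfall(number):
    """Normalize a Gatherer-style collector number (e.g. "123a"/"123b" for
    the two faces of a DFC) to the Scryfall-style plain number ("123").
    Assumes only a trailing a/b distinguishes faces; other suffixes pass
    through unchanged."""
    m = re.fullmatch(r"(\d+)[ab]", number)
    return m.group(1) if m else number
```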
Do you have any more questions?
You can download the .txt file for each decklist from MTGGoldfish by clicking on the “Download > Exact Card Versions (Tabletop)” button. However, please note that these files may not be compatible with Xmage due to differences in formatting. Nonetheless, creating a conversion script should not be too difficult.
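A conversion sketch along those lines, assuming the exported lines look something like `4 Lightning Bolt [M10]` (inspect an actual MTGGoldfish export first; the set-tag style is my assumption) and targeting the `count [SET:number] Name` shape XMage uses:

```python
import re

# Assumed input pattern: "<count> <card name> [<set code>]"
LINE_RE = re.compile(r"^(\d+)\s+(.+?)\s+\[(\w+)\]\s*$")

def goldfish_to_dck(line):
    """Convert one assumed-MTGGoldfish line to an XMage .dck line,
    or return None for headers, blanks, and unrecognized lines."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    count, name, set_code = m.groups()
    # The txt export gives no collector number, so use "?" as a placeholder.
    return f"{count} [{set_code}:?] {name}"
```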
mtg

Magic the Gathering scripts.

Scripts:
- analyze_deck_colors - reports colors of the deck according to correct algorithm [http://t-a-w.blogspot.com/2013/03/simple-and-correct-algorithm-for.html]
- clean_up_decklist - clean up manually created decklist
- cod2dck - convert Cockatrice’s .cod to XMage’s .dck
- cod2txt - convert Cockatrice’s .cod to .txt format
- txt2cod - convert plaintext deck formats to Cockatrice’s .cod
- txt2dck - convert plaintext deck format to XMage
- txt2txt - convert plaintext deck format to plaintext deck format (i.e. normalize the decklist)
- url2cod - download decklists from URL and convert to .cod (a few popular websites supported)
- url2dck - download decklists from URL and convert to XMage .dck format
- url2txt - download decklists from URL and convert to .txt format

Data management scripts (these are used to generate data in data/; you probably won’t need to run them yourself):
- generate_colors_tsv_mtgjson - generate data/colors.tsv from mtgjson’s AllSets-x.json (recommended)
- generate_colors_tsv_cockatrice - generate data/colors.tsv from cockatrice’s cards.xml (use mtgjson instead)
- mage_card_map_generator - generate data/mage_cards.txt
Here are step-by-step instructions to migrate decks_v2.json to .dck files with the desired structure, assuming no prior knowledge of the command line:

1. Save the following script as migrate_decks.py:

import json
import os
import re
from typing import List, Dict

DECKS_FOLDER = 'Preconstructed Decks'


def load_decks(file_path: str) -> List[Dict]:
    with open(file_path, 'r') as f:
        return json.load(f)


def format_deck_name(name: str) -> str:
    name = name.lower().replace(' ', '_').replace('-', '_')
    return re.sub(r'[^a-z0-9_]', '', name)


def get_deck_info(deck: Dict) -> Dict:
    return {
        'name': format_deck_name(deck['name']),
        'type': deck['type'],
        'set_code': deck['set_code'].upper(),
        'set_name': deck['set_name'],
        'release_date': deck['release_date'],
        'deck_folder': DECKS_FOLDER,
        'cards': deck['cards'],
        'sideboard': deck['sideboard']
    }


def build_deck_text(deck_info: Dict) -> str:
    lines = [
        f'// {deck_info["name"]}',
        f'// Set: {deck_info["set_name"]} ({deck_info["set_code"]})',
        f'// Release Date: {deck_info["release_date"]}',
        '',
    ]
    for card in deck_info['cards']:
        lines.append(f'{card["count"]} [{card["set_code"]}:{card["number"]}] {card["name"]}')
    lines.append('')
    lines.append('SB:')
    for card in deck_info['sideboard']:
        lines.append(f'{card["count"]} [{card["set_code"]}:{card["number"]}] {card["name"]}')
    return '\n'.join(lines)


def build_deck_path(deck_info: Dict) -> str:
    return os.path.join(deck_info['deck_folder'],
                        deck_info['type'],
                        deck_info['set_code'])


def write_deck_file(deck_info: Dict, deck_text: str) -> None:
    deck_path = build_deck_path(deck_info)
    os.makedirs(deck_path, exist_ok=True)
    filename = f"{deck_info['name']}.dck"
    file_path = os.path.join(deck_path, filename)
    with open(file_path, 'w') as f:
        f.write(deck_text)


def migrate_decks(input_file: str, error_file: str) -> None:
    decks = load_decks(input_file)
    error_decks: List[Dict] = []
    for deck in decks:
        try:
            deck_info = get_deck_info(deck)
            deck_text = build_deck_text(deck_info)
            write_deck_file(deck_info, deck_text)
        except KeyError:
            error_decks.append(deck)
    if error_decks:
        with open(error_file, 'w') as f:
            json.dump(error_decks, f)


if __name__ == '__main__':
    migrate_decks('decks_v2.json', 'error_decks.json')

2. Open a terminal (Command Prompt on Windows) and navigate to the folder where the decks_v2.json file and the migrate_decks.py file are located. You can do this by typing cd followed by the path to the folder, such as cd C:\Users\YourName\Downloads\magic-preconstructed-decks-master.
3. Type python migrate_decks.py and press Enter to run the Python script.
4. The script will create a .dck file for each deck in the decks_v2.json file, with the desired structure.

Note: If you don’t have Python installed on your computer, you can download it from the official website: https://www.python.org/downloads/. Choose the latest version for your operating system and follow the installation instructions.
https://github.com/taw/magic-preconstructed-decks-data
This repository contains machine readable decklist data generated from:
Files
decks.json has the traditional cards + sideboard structure, with commanders reusing the sideboard.
decks_v2.json has a cards + sideboard + commander structure. You should use this one.

Data format

The data file is a JSON array, with every element representing one deck.
Fields for each deck:
- name - deck name
- type - deck type
- set_code - mtgjson set code
- set_name - set name
- release_date - deck release date (many decks are released much after their set)
- cards - list of cards in the deck’s mainboard
- sideboard - list of cards in the deck’s sideboard
- commander - any commanders deck has (can be multiple for partners)
Each card is:
- name - card name
- set_code - mtgjson set card is from (decks often have cards from multiple sets)
- number - card collector number
- foil - whether this is a foil version
- count - how many copies of the given card
- mtgjson_uuid - mtgjson uuid
- multiverseid - Gatherer multiverseid of the card, if the card is on Gatherer (optional field)
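A minimal sketch of consuming the fields described above, written against the field names in this README (loading decks_v2.json from the current directory is an assumption):

```python
import json

def summarize_decks(decks):
    """Return one summary line per deck, using the documented fields."""
    summaries = []
    for deck in decks:
        main_count = sum(card["count"] for card in deck["cards"])
        side_count = sum(card["count"] for card in deck["sideboard"])
        # "commander" can hold multiple cards (e.g. partners)
        commanders = ", ".join(card["name"] for card in deck.get("commander", []))
        line = (f'{deck["release_date"]} [{deck["set_code"]}] {deck["name"]}: '
                f'{main_count} main, {side_count} side')
        if commanders:
            line += f', commander: {commanders}'
        summaries.append(line)
    return summaries

if __name__ == "__main__":
    with open("decks_v2.json", encoding="utf-8") as f:
        print("\n".join(summarize_decks(json.load(f))))
```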
Data Limitations
All precons ever released by Wizards of the Coast should be present, and decklists should always contain the right cards, with correct counts, foiling, and mainboard/sideboard/commander status.
Source decklists generally do not say which printing (set and card number) each card is from, so we need to use heuristics to figure that out.
We use a script to infer the most likely set for each card based on some heuristics, and as far as I know, it always matches perfectly.
That just leaves the situation where there are multiple printings of the same card in the same set.
If some of the printings are special (full art basics, Jumpstart basics, showcase frames etc.), these have been manually chosen to match every product.
If you see any errors for anything mentioned above, please report them, so they can be fixed.
That just leaves the case of multiple non-special printings of the same card in the same set - most commonly basic lands. In such cases one of them is chosen arbitrarily, even though in reality a core set deck with 16 Forests would likely have 4 of each Forest in that core set, not 16 copies of one of them.
Feel free to create an issue with data on exact printings if you want, but realistically we’ll never get them all, and it’s not something most people care about much.
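For what it’s worth, the even split described above is easy to apply yourself after the fact. A sketch (the collector numbers in the example are placeholders):

```python
def split_basics(count, printings):
    """Spread `count` copies as evenly as possible across the given
    printings, giving the remainder to the first few (round-robin)."""
    base, extra = divmod(count, len(printings))
    return {p: base + (1 if i < extra else 0) for i, p in enumerate(printings)}
```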
tappedout exports as csv with set information:
.csv
Board,Qty,Name,Printing,Foil,Alter,Signed,Condition,Language
main,1,Beacon Bolt,GRN,,,,,
.dck
1 [GRN:?] Beacon Bolt
Here is a Python script that reads the .csv file and writes the required format to a .dck file. This script uses the csv module to read the .csv file and write to the .dck file.
import csv

def read_csv_file(file_path):
    with open(file_path, 'r') as file:
        return list(csv.reader(file))

def write_to_dck_file(file_path, data):
    with open(file_path, 'w') as file:
        file.writelines(data)

def convert_csv_to_dck_format(csv_data):
    csv_header, *csv_rows = csv_data
    return [format_dck_line(row) for row in csv_rows]

def format_dck_line(row):
    quantity, name, printing = row[1], row[2], row[3]
    return f"{quantity} [{printing}:?] {name}\n"

csv_data = read_csv_file('input.csv')
dck_data = convert_csv_to_dck_format(csv_data)
write_to_dck_file('output.dck', dck_data)
The script reads the whole .csv, discards the header row, and formats each remaining row as a .dck line. Please replace ‘input.csv’ with the path to your .csv file and ‘output.dck’ with the path where you want to create the .dck file. Run this script in a Python environment, and it will create the .dck file with the required format.
MTG Deck Legality Web Checker
A self-contained web tool for validating Magic: The Gathering decklists in a custom format.
On first launch, the app automatically downloads and processes all required card data from Scryfall.
No manual setup beyond running the app is needed.
Features
Installation
It is recommended to use a virtual environment to keep dependencies isolated.
1. Clone the Repository
git clone https://git.disroot.org/hirrolot19/mtg-legality-checker.git
cd mtg-legality-checker
2. Create and Activate a Virtual Environment
python -m venv venv
source venv/bin/activate
3. Install Dependencies
Running the App
From the project root (with the virtual environment activated):
Then open your browser and navigate to:
http://127.0.0.1:5000/
First Run Behavior
On first launch, the app will:
This process may take a few minutes.
Once complete, cached files are stored persistently for future sessions.
Using the Web Checker

Decklist Rules
- One card per line, with an optional leading quantity (e.g. 4 Lightning Bolt or 2x Opt).
- Lines starting with # are treated as comments and ignored.

Advanced Usage

For detailed information about the supporting scripts and command-line tools, see tools/README.md.