
• counterspell (OP):

    MTG Deck Legality Web Checker

    A self-contained web tool for validating Magic: The Gathering decklists in a custom format.
    On first launch, the app automatically downloads and processes all required card data from Scryfall.
    No manual setup beyond running the app is needed.

    This project was inspired by Badaro’s Validator (on GitHub), a simple web tool for checking card lists in “nostalgia” Magic: The Gathering formats: https://badaro.github.io/validator


    Features

    • Automatic setup: Downloads Oracle bulk-data from Scryfall and builds legality files on first run.
    • Custom format validation: Checks decks for banned or out-of-format cards.
    • Browser interface: Paste a decklist, click Validate, and view results instantly.

    Installation

    It is recommended to use a virtual environment to keep dependencies isolated.

    1. Clone the Repository

    git clone https://git.disroot.org/hirrolot19/mtg-legality-checker.git
    cd mtg-legality-checker
    

    2. Create and Activate a Virtual Environment

    python -m venv venv
    source venv/bin/activate
    

    3. Install Dependencies

    pip install -r requirements.txt
    

    Running the App

    From the project root (with the virtual environment activated):

    python app.py
    

    Then open your browser and navigate to:

    http://127.0.0.1:5000/
    

    First Run Behavior

    On first launch, the app will:

    1. Download Scryfall’s Oracle card data.
    2. Filter legal cards for the custom format based on a Scryfall query. The default query is f:standard usd<=1 tix<=0.1
    3. Convert the filtered data into a validation JSON file.

    This process may take a few minutes.
    Once complete, cached files are stored persistently for future sessions.
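The first-run steps above can be sketched in plain Python. This is a hypothetical sketch, not the app's actual code: the Scryfall bulk-data endpoint and the `legalities`/`prices` card fields are real, but the function names and output layout are invented here to mirror the default query.

```python
import json
from urllib.request import urlopen

# Scryfall's bulk-data record for the Oracle card dump
BULK_META_URL = "https://api.scryfall.com/bulk-data/oracle-cards"

def matches_default_query(card):
    """Approximate 'f:standard usd<=1 tix<=0.1' against a Scryfall card object."""
    if card.get("legalities", {}).get("standard") != "legal":
        return False
    prices = card.get("prices") or {}
    usd, tix = prices.get("usd"), prices.get("tix")
    return (usd is not None and float(usd) <= 1.0
            and tix is not None and float(tix) <= 0.1)

def build_legality_file(path="legal_cards.json"):
    # The bulk-data record points at the actual Oracle card dump.
    meta = json.load(urlopen(BULK_META_URL))
    cards = json.load(urlopen(meta["download_uri"]))
    legal = sorted(c["name"] for c in cards if matches_default_query(c))
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"sets": [], "cards": legal, "banned": []}, f, indent=2)
```

Note that Scryfall asks API clients to send a descriptive User-Agent header, so a real implementation would set one on its requests.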


    Using the Web Checker

    1. Paste your decklist into the text box.
    2. Click Validate.
    3. The app displays any cards that are banned or not legal in the format.

    Decklist Rules

    • One card per line.
    • Quantities accepted (4 Lightning Bolt, 2x Opt).
    • Comments start with #.
    • “Deck” and “Sideboard” headers are ignored.
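For example, a decklist exercising all four rules (card choices are just for illustration):

```text
# Burn list (this comment line is skipped)
Deck
4 Lightning Bolt
2x Opt
20 Mountain

Sideboard
3 Abrade
```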

    Advanced Usage

    For detailed information about the supporting scripts and command-line tools, see tools/README.md.




  • I’ve managed to write another script that seems to work:

    import json
    import re
    
    def load_legal_cards(json_file):
        """
        Load legal cards from a JSON file with structure:
        { "sets": [], "cards": [], "banned": [] }
        """
        with open(json_file, 'r', encoding='utf-8') as f:
            data = json.load(f)
        # sets give O(1) membership checks during validation
        legal_cards = {card.lower() for card in data.get('cards', [])}
        banned_cards = {card.lower() for card in data.get('banned', [])}
        return legal_cards, banned_cards
    
    def clean_line(line):
        """
        Remove quantities, set info, markers, and whitespace
        Skip lines that are section headers like 'Deck', 'Sideboard'
        """
        line = re.sub(r'^\d+\s*x?\s*', '', line)  # "2 " or "2x "
        line = re.sub(r'\(.*?\)', '', line)        # "(SET)"
        line = re.sub(r'\*\w+\*', '', line)        # "*F*"
        line = line.strip()
        if re.match(r'^(deck|sideboard)\s*:?\s*$', line, re.IGNORECASE):
            return None
        return line if line else None
    
    def validate_deck(deck_file, legal_cards, banned_cards):
        """
        Returns a list of illegal cards
        """
        illegal_cards = []
        with open(deck_file, 'r', encoding='utf-8') as f:
            lines = f.readlines()
    
        for line in lines:
            card_name = clean_line(line)
            if not card_name or card_name.startswith("#"):
                continue  # skip empty or comment lines
    
            card_lower = card_name.lower()
            if card_lower in banned_cards or card_lower not in legal_cards:
                illegal_cards.append(card_name)
    
        return illegal_cards
    
    def main():
        legal_cards_file = 'legal_cards.json'   # JSON with "cards" and optional "banned"
        decklist_file = 'decklist.txt'          # Your decklist input
    
        legal_cards, banned_cards = load_legal_cards(legal_cards_file)
        illegal_cards = validate_deck(decklist_file, legal_cards, banned_cards)
    
        if illegal_cards:
            print("Illegal cards:")
            for card in illegal_cards:
                print(card)
    
    if __name__ == "__main__":
        main()
    
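As a quick sanity check, the cleaning rules above behave like this (`clean_line` is re-implemented standalone here so the snippet runs on its own):

```python
import re

def clean_line(line):
    """Standalone copy of the cleaning rules from the script above."""
    line = re.sub(r'^\d+\s*x?\s*', '', line)   # strip "2 " or "2x "
    line = re.sub(r'\(.*?\)', '', line)        # strip "(SET)"
    line = re.sub(r'\*\w+\*', '', line)        # strip "*F*"
    line = line.strip()
    if re.match(r'^(deck|sideboard)\s*:?\s*$', line, re.IGNORECASE):
        return None
    return line if line else None

print(clean_line("4 Lightning Bolt"))   # Lightning Bolt
print(clean_line("2x Opt (DOM) *F*"))   # Opt
print(clean_line("Sideboard"))          # None
```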

  • I exported the Standard Penny collection from Moxfield to JSON using a Python script:

    import csv
    import json
    
    input_csv = 'moxfield_haves_2025-10-21-1123Z.csv'
    output_json = 'standard_penny.json'
    
    sets = set()
    cards = []
    
    with open(input_csv, newline='', encoding='utf-8') as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            name = row.get('Name')
            edition = row.get('Edition')
            if name:
                cards.append(name)
            if edition:
                sets.add(edition.upper())
    
    sets = sorted(sets)
    
    output_data = {
        "sets": sets,
        "cards": cards
    }
    
    with open(output_json, 'w', encoding='utf-8') as jsonfile:
        json.dump(output_data, jsonfile, indent=2)
    
    print(f"JSON saved to {output_json}")
    

    I saved the JSON file as validator/formats/standardpenny.json and added it to the validator’s config:

    { "name": "Standard Penny", "key": "standardpenny", "datafile":"formats/standardpenny.json" },
    

    Then I tried to validate this deck exported as Plain Text from Moxfield and got the error.






  • I’m not aware of a single tool, but you could ensure the deck is standard-legal in any normal deck-building tool, then additionally check it against the Penny Dreadful deck checker. If it passes both, it should be legal in your format (assuming I understand what you’re doing correctly).

    Edit: Nevermind, I see you’re limiting it to $1, not $0.01, despite borrowing the name. Penny Dreadful checker won’t work.

    Yeah, Penny Dreadful uses tix<=0.02, while this uses both tix<=0.1 and usd<=1.



  • ✅ This will create a fully Moxfield-compatible CSV with all cards from a Scryfall search.

    import requests
    import csv
    import time
    
    QUERY = "f:standard f:penny usd<=1"
    BASE_URL = "https://api.scryfall.com/cards/search"
    PARAMS = {
        "q": QUERY,
        "unique": "cards",
        "format": "json"
    }
    
    OUTPUT_FILE = "moxfield_import.csv"
    
    FIELDNAMES = [
        "Count",
        "Tradelist Count",
        "Name",
        "Edition",
        "Condition",
        "Language",
        "Foil",
        "Tags",
        "Last Modified",
        "Collector Number",
        "Alter",
        "Proxy",
        "Purchase Price"
    ]
    
    def fetch_all_cards():
        url = BASE_URL
        params = PARAMS.copy()
        while True:
            resp = requests.get(url, params=params)
            resp.raise_for_status()
            data = resp.json()
            for card in data.get("data", []):
                yield card
            if not data.get("has_more"):
                break
            url = data["next_page"]
            params = None
            time.sleep(0.2)
    
    def write_cards_to_csv(filename):
        with open(filename, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
            writer.writeheader()
            for card in fetch_all_cards():
                row = {
                    "Count": 1,
                    "Tradelist Count": "",
                    "Name": card.get("name"),
                    "Edition": card.get("set"),
                    "Condition": "",
                    "Language": card.get("lang"),
                    "Foil": "Yes" if card.get("foil") else "No",
                    "Tags": "",
                    "Last Modified": "",
                    "Collector Number": card.get("collector_number"),
                    "Alter": "",
                    "Proxy": "",
                    "Purchase Price": ""
                }
                writer.writerow(row)
    
    if __name__ == "__main__":
        write_cards_to_csv(OUTPUT_FILE)
        print(f"Saved all cards to {OUTPUT_FILE}")
    


  • Is there a deckbuilder that allows using just that list to build decks? How would I import it?

    #!/bin/bash
    
    # note: '<=' must be percent-encoded as %3C%3D in the query string
    url="https://api.scryfall.com/cards/search?q=f%3Astandard+f%3Apenny+usd%3C%3D1"
    data=()
    
    while [ -n "$url" ]; do
        response=$(curl -s "$url")
        data_chunk=$(echo "$response" | jq -c '.data[]')
        while read -r card; do
            data+=("$card")
        done <<< "$data_chunk"
    
        has_more=$(echo "$response" | jq -r '.has_more')
        if [ "$has_more" = "true" ]; then
            url=$(echo "$response" | jq -r '.next_page')
        else
            url=""
        fi
    done
    
    for card_json in "${data[@]}"; do
        echo "$card_json" | jq -r '.name'
    done
    








  • You can go here: https://mtg.wtf/deck. The same data is also exported to mtgjson if you want it in JSON format: https://mtgjson.com/. It is also available in a few other export formats.

    Source data for it is in https://github.com/taw/magic-preconstructed-decks with source URLs for every deck (some of these expired by now and you’d need to go to the Web Archive - WotC redesigns its website every few years, killing old URLs).

    Inferring exact set and collector number based on all available information is done algorithmically.

    Everything should have correct names, quantities, and set codes.

    A few cards won’t have correct collector numbers. The list of cards which are generally expected to not have exact collector number: “Plains”, “Island”, “Swamp”, “Mountain”, “Forest”, “Wastes”, “Azorius Guildgate”, “Boros Guildgate”, “Dimir Guildgate”, “Golgari Guildgate”, “Gruul Guildgate”, “Izzet Guildgate”, “Orzhov Guildgate”, “Rakdos Guildgate”, “Selesnya Guildgate”, “Simic Guildgate”

    For everything else, the algorithm is exact as far as we know. Anything the algorithm can’t detect automatically it flags, and we resolve it manually.

    Tomasz Wegrzanowski

    I noticed that the default deck download format on the website doesn’t include set code and collector number information.

    If you’re fine with JSON, you can use mtgjson, or this file: https://raw.githubusercontent.com/taw/magic-preconstructed-decks-data/master/decks_v2.json (which is exported to mtgjson).

    In case it matters, collector numbers are Gatherer-style not Scryfall-style (so DFCs are 123a / 123b, not 123 etc.). This only really affects cards with multiple parts.
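A rough conversion between the two numbering styles can be sketched like this. The function name is made up, and it assumes the only divergence is the trailing face letter on multi-part cards; real collections may have other exceptions.

```python
import re

def gatherer_to_scryfall_number(number):
    """Rough conversion of a Gatherer-style collector number to
    Scryfall style: drop a trailing face letter ('123a' -> '123').
    Only handles the simple DFC case; other suffixes pass through."""
    m = re.fullmatch(r'(\d+)[ab]', number)
    return m.group(1) if m else number

print(gatherer_to_scryfall_number("123a"))  # 123
print(gatherer_to_scryfall_number("57"))    # 57
```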

    Do you have any more questions?

    Tomasz Wegrzanowski




  • https://github.com/taw/mtg

    mtg

    Magic the Gathering scripts.

    scripts

    • analyze_deck_colors - reports colors of the deck according to correct algorithm [ http://t-a-w.blogspot.com/2013/03/simple-and-correct-algorithm-for.html ]
    • clean_up_decklist - clean up manually created decklist
    • cod2dck - convert Cockatrice’s .cod to XMage’s .dck
    • cod2txt - convert Cockatrice’s .cod to .txt format
    • txt2cod - convert plaintext deck formats to Cockatrice’s cod
    • txt2dck - convert plaintext deck format to XMage
    • txt2txt - convert plaintext deck format to plaintext deck format (i.e. normalize the decklist)
    • url2cod - download decklists from URL and convert to .cod (a few popular websites supported)
    • url2dck - download decklists from URL and convert to XMage .dck format
    • url2txt - download decklists from URL and convert to .txt format

    data management

    These are used to generate data in data/; you probably won’t need to run them yourself.

    • generate_colors_tsv_mtgjson - generate data/colors.tsv from mtgjson’s AllSets-x.json (recommended)
    • generate_colors_tsv_cockatrice - generate data/colors.tsv from cockatrice’s cards.xml (use mtgjson instead)
    • mage_card_map_generator - generate data/mage_cards.txt

  • Here are step-by-step instructions to migrate decks_v2.json to .dck files with the desired structure, assuming no prior knowledge of the command line:

    1. Open a web browser and go to the following link: https://github.com/taw/magic-preconstructed-decks
    2. Click the green “Code” button and select “Download ZIP” to download the repository as a ZIP file.
    3. Extract the ZIP file to a folder on your computer.
    4. Open the folder and create a new file migrate_decks.py.
    5. Right-click on the file and select “Open With” and then choose a text editor such as Notepad or Sublime Text.
    6. Copy the following Python script and paste it into the text editor:
    import json
    import os
    import re
    
    from typing import List, Dict
    
    DECKS_FOLDER = 'Preconstructed Decks'
    
    def load_decks(file_path: str) -> List[Dict]:
        with open(file_path, 'r') as f:
            return json.load(f)
    
    def format_deck_name(name: str) -> str:
        name = name.lower().replace(' ', '_').replace('-', '_')
        return re.sub(r'[^a-z0-9_]', '', name)
    
    def get_deck_info(deck: Dict) -> Dict:
        return {
            'name': format_deck_name(deck['name']),
            'type': deck['type'],
            'set_code': deck['set_code'].upper(),
            'set_name': deck['set_name'],
            'release_date': deck['release_date'],
            'deck_folder': DECKS_FOLDER,
            'cards': deck['cards'],
            'sideboard': deck['sideboard']
        }
    
    def build_deck_text(deck_info: Dict) -> str:
        lines = [
            f'// {deck_info["name"]}',
            f'// Set: {deck_info["set_name"]} ({deck_info["set_code"]})',
            f'// Release Date: {deck_info["release_date"]}',
            '',
        ]
    
        for card in deck_info['cards']:
            lines.append(f'{card["count"]} [{card["set_code"]}:{card["number"]}] {card["name"]}')
    
        lines.append('')
        lines.append('SB:')
    
        for card in deck_info['sideboard']:
            lines.append(f'{card["count"]} [{card["set_code"]}:{card["number"]}] {card["name"]}')
    
        return '\n'.join(lines)
    
    def build_deck_path(deck_info: Dict) -> str:
        return os.path.join(deck_info['deck_folder'],
                            deck_info['type'],
                            deck_info['set_code'])
    
    def write_deck_file(deck_info: Dict, deck_text: str) -> None:
        deck_path = build_deck_path(deck_info)
        os.makedirs(deck_path, exist_ok=True)
    
        filename = f"{deck_info['name']}.dck"
        file_path = os.path.join(deck_path, filename)
    
        with open(file_path, 'w') as f:
            f.write(deck_text)
    
    def migrate_decks(input_file: str, error_file: str) -> None:
        decks = load_decks(input_file)
    
        error_decks: List[Dict] = []
        for deck in decks:
            try:
                deck_info = get_deck_info(deck)
                deck_text = build_deck_text(deck_info)
                write_deck_file(deck_info, deck_text)
            except KeyError:
                error_decks.append(deck)
    
        if error_decks:
            with open(error_file, 'w') as f:
                json.dump(error_decks, f)
    
    if __name__ == '__main__':
        migrate_decks('decks_v2.json', 'error_decks.json')
    
    7. Open a terminal or command prompt on your computer. On Windows, you can do this by pressing the Windows key, typing “cmd”, and pressing Enter.
    8. Navigate to the folder where the decks_v2.json file and the migrate_decks.py file are located. You can do this by typing cd followed by the path to the folder, such as cd C:\Users\YourName\Downloads\magic-preconstructed-decks-master.
    9. Type python migrate_decks.py and press Enter to run the Python script.
    10. Wait for the script to finish running. It will create a .dck file for each deck in decks_v2.json, with the desired structure.

    Note: If you don’t have Python installed on your computer, you can download it from the official website: https://www.python.org/downloads/. Choose the latest version for your operating system and follow the installation instructions.


  • https://github.com/taw/magic-preconstructed-decks-data

    This repository contains machine-readable decklist data generated from https://github.com/taw/magic-preconstructed-decks.

    Files

    decks.json has traditional cards + sideboard structure, with commanders reusing sideboard.

    decks_v2.json has cards + sideboard + commander structure. You should use this one.

    Data format

    The data file is a JSON array, with every element representing one deck.

    Fields for each deck:

    • name - deck name
    • type - deck type
    • set_code - mtgjson set code
    • set_name - set name
    • release_date - deck release date (many decks are released well after their set)
    • cards - list of cards in the deck’s mainboard
    • sideboard - list of cards in the deck’s sideboard
    • commander - any commanders the deck has (can be multiple for partners)

    Each card is:

    • name - card name
    • set_code - the mtgjson set the card is from (decks often have cards from multiple sets)
    • number - card collector number
    • foil - whether this is a foil version
    • count - how many copies of the given card
    • mtgjson_uuid - mtgjson uuid
    • multiverseid - Gatherer multiverseid of the card, if the card is on Gatherer (optional field)
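Putting the deck and card fields together, a single element of the array would look roughly like this (every value below is invented for illustration):

```json
{
  "name": "Example Deck",
  "type": "Planeswalker Deck",
  "set_code": "XYZ",
  "set_name": "Example Set",
  "release_date": "2020-01-01",
  "cards": [
    {
      "name": "Example Card",
      "set_code": "XYZ",
      "number": "123",
      "foil": false,
      "count": 4,
      "mtgjson_uuid": "00000000-0000-0000-0000-000000000000",
      "multiverseid": 123456
    }
  ],
  "sideboard": [],
  "commander": []
}
```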

    Data Limitations

    All precons ever released by Wizards of the Coast should be present, and decklists should always contain the right cards, with correct counts, foiling, and mainboard/sideboard/commander status.

    Source decklists generally do not say which printing (set and card number) each card is from, so we need to use heuristics to figure that out.

    We use a script to infer the most likely set for each card based on some heuristics, and as far as I know, it always matches perfectly.

    That just leaves situations where there are multiple printings of the same card in the same set.

    If some of the printings are special (full art basics, Jumpstart basics, showcase frames etc.), these have been manually chosen to match every product.

    If you see any errors for anything mentioned above, please report them, so they can be fixed.

    That just leaves the case of multiple non-special printings of the same card in the same set - most commonly basic lands. In that case one of them is chosen arbitrarily, even though in reality a core set deck with 16 Forests would likely have 4 of each Forest in that core set, not 16 of one of them.

    Feel free to create an issue with data on the exact printing if you want, but realistically we’ll never get them all, and it’s not something most people care about much.


  • tappedout exports as csv with set information:

    .csv
    Board,Qty,Name,Printing,Foil,Alter,Signed,Condition,Language
    main,1,Beacon Bolt,GRN,,,,,
    
    .dck
    1 [GRN:?] Beacon Bolt
    

    Here is a Python script that reads the .csv file and writes the required format to a .dck file. It uses the csv module to read the .csv file and plain file writes for the .dck output.

    import csv
    
    def read_csv_file(file_path):
        with open(file_path, 'r') as file:
            return list(csv.reader(file))
    
    def write_to_dck_file(file_path, data):
        with open(file_path, 'w') as file:
            file.writelines(data)
    
    def convert_csv_to_dck_format(csv_data):
        csv_header, *csv_rows = csv_data
        return [format_dck_line(row) for row in csv_rows]
    
    def format_dck_line(row):
        quantity, name, printing = row[1], row[2], row[3]
        return f"{quantity} [{printing}:?] {name}\n"
    
    csv_data = read_csv_file('input.csv')
    dck_data = convert_csv_to_dck_format(csv_data)
    write_to_dck_file('output.dck', dck_data)
    

    This script works as follows:

    1. It reads the entire .csv file into a list of rows.
    2. It separates the header from the data rows by unpacking (csv_header, *csv_rows), which skips the header.
    3. For each data row, it formats a line in the .dck format “Quantity [Printing:?] Name”. Here, Quantity is the second column in the .csv file, Name is the third column, and Printing is the fourth.
    4. It writes the formatted lines to the .dck file.

    Please replace ‘input.csv’ with the path to your .csv file and ‘output.dck’ with the path where you want to create the .dck file. Run this script in a Python environment, and it will create the .dck file with the required format.
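Note that the script above writes every row the same way regardless of the Board column, so sideboard rows would end up in the mainboard. A variant that honors the Board column might look like this; the "side" marker and the per-line SB: prefix are assumptions, so check them against your actual export and target tool.

```python
import csv

def csv_row_to_dck_line(board, qty, name, printing):
    """Format one tappedout CSV row as a .dck line; sideboard rows get
    an 'SB: ' prefix (XMage-style; verify against your target tool)."""
    line = f"{qty} [{printing}:?] {name}"
    # Assumes sideboard rows are marked "side" in the Board column.
    return f"SB: {line}" if board.lower() == "side" else line

def convert(csv_path, dck_path):
    # DictReader keys off the header row, so column order doesn't matter.
    with open(csv_path, newline='', encoding='utf-8') as f:
        rows = list(csv.DictReader(f))
    with open(dck_path, 'w', encoding='utf-8') as f:
        for row in rows:
            f.write(csv_row_to_dck_line(row["Board"], row["Qty"],
                                        row["Name"], row["Printing"]) + "\n")
```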