diff --git a/README.md b/README.md index 49baf288..52cfd43a 100644 --- a/README.md +++ b/README.md @@ -10,14 +10,11 @@ ABCE is a module to perform agent-based capacity expansion (CE) modeling for ele * [Installation](#installation) - [Linux / MacOS / Windows Subsystem for Linux](#linux--macos--windows-subsystem-for-linux) - [Windows 10](#windows-10) - - [Optional, Argonne only: installing with A-LEAF](#optional--argonne-only-installing-with-a-leaf) - [Optional: installing with CPLEX](#optional-installing-with-cplex) * [Usage](#usage) - [Running ABCE](#running-abce) - [Input files](#input-files) - [Outputs](#outputs) -* [Contributing](#contributing) -* [Testing](#testing) * [License](#license) ## Installation @@ -28,33 +25,25 @@ ABCE is a module to perform agent-based capacity expansion (CE) modeling for ele `git clone https://github.com/abce-dev/abce` -2. If using A-LEAF, see the optional [Installing with A-LEAF](#optional--argonne-only-installing-with-a-leaf) section below. Currently, only users with Argonne gitlab credentials can use A-LEAF, but a public release is coming soon! +2. If using CPLEX, see the optional [Installing with CPLEX](#optional-installing-with-cplex) section below -3. If using CPLEX, see the optional [Installing with CPLEX](#optional-installing-with-cplex) section below - -4. Inside your local `abce` directory, run the installation script with: +3. Inside your local `abce` directory, run the installation script with: `bash ./install.sh` -5. When prompted for the A-LEAF repository, do one of the following: - - * Enter the absolute path to the directory where you cloned A-LEAF, or +4. Wait for the installation script to run to completion. Review any errors/issues printed for your reference at the end of execution. - * Press Enter without entering any text, if not using A-LEAF +5. Restart your terminal session, or re-source your `.bashrc` file. -6. Wait for the installation script to run to completion. 
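The README's test step invokes `run.py` with two flags (`--settings_file=./settings.yml --inputs_path=.`). As a hedged illustration of that interface only (ABCE's actual `run.py` may parse its arguments differently), the two flags can be modeled with `argparse`:

```python
import argparse
from pathlib import Path

# Illustrative sketch, not ABCE's real CLI code: models the two flags
# shown in the README's example invocation of run.py.
def parse_cli(argv):
    parser = argparse.ArgumentParser(description="ABCE invocation sketch")
    parser.add_argument("--settings_file", type=Path, required=True,
                        help="path to the YAML settings file")
    parser.add_argument("--inputs_path", type=Path, required=True,
                        help="directory containing the scenario input files")
    return parser.parse_args(argv)

args = parse_cli(["--settings_file=./settings.yml", "--inputs_path=."])
print(args.settings_file, args.inputs_path)
```

Passing the argument list explicitly (instead of reading `sys.argv`) keeps the sketch easy to exercise in isolation.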
Review any errors/issues printed for your reference at the end of execution. - -7. Restart your terminal session, or re-source your `.bashrc` file. - -8. If using Conda to manage environments, activate the ABCE conda environment with: +6. If using Conda to manage environments, activate the ABCE conda environment with: `conda activate abce_env` -9. Rerun the installation script to complete the environment setup: +7. Rerun the installation script to complete the environment setup: `bash ./install.sh` -10. Test the installation using one of the examples: +8. Test the installation using one of the examples: `cd examples/single_agent_example` @@ -64,7 +53,7 @@ ABCE is a module to perform agent-based capacity expansion (CE) modeling for ele `python ../../run.py --settings_file=./settings.yml --inputs_path=.` -11. Once the previous command runs to completion without failing, generate a precompiled Julia sysimage file from within the `abce/env` directory. +9. Once the previous command runs to completion without failing, generate a precompiled Julia sysimage file from within the `abce/env` directory. `julia make_sysimage.jl` @@ -75,25 +64,23 @@ ABCE is a module to perform agent-based capacity expansion (CE) modeling for ele 2. Download and install [Julia 1.8](https://julialang.org/downloads/). Check the box in the installer to add Julia to the PATH. -3. If using A-LEAF, see the optional [Installing with A-LEAF](#optional--argonne-only-installing-with-a-leaf) section below. Currently, only users with Argonne gitlab credentials can use A-LEAF, but a public release is coming soon! - -4. If using CPLEX, see the optional [Installing with CPLEX](#optional-installing-with-cplex) section below +3. If using CPLEX, see the optional [Installing with CPLEX](#optional-installing-with-cplex) section below -5. Using the Anaconda Powershell, clone this repo to your local machine: +4. 
Using the Anaconda Powershell, clone this repo to your local machine: `git clone https://github.com/abce-dev/abce` -6. Create the local conda environment: +5. Create the local conda environment: `conda env create -f .\environment_win.yml` -7. Activate the `abce_anv` conda environment: +6. Activate the `abce_anv` conda environment: `conda activate abce_env` -8. Set the `ABCE_DIR` environment variable to the absolute path to your `abce` repo (e.g. `C:\Users\myname\abce`) +7. Set the `ABCE_DIR` environment variable to the absolute path to your `abce` repo (e.g. `C:\Users\myname\abce`) -9. Test the installation using one of the examples: +8. Test the installation using one of the examples: `cd examples\single_agent_example` @@ -103,30 +90,10 @@ ABCE is a module to perform agent-based capacity expansion (CE) modeling for ele `python ..\..\run.py --settings_file=.\settings.yml --inputs_path=.` -10. Once the previous command runs to completion without failing, generate a precompiled Julia sysimage file from within the `abce\env` directory. - - `julia make_sysimage.jl` - -### Optional / Argonne only: installing with A-LEAF - -1. Clone the ABCE A-LEAF repo: - - `git clone git-out.gss.anl.gov/kbiegel/kb_aleaf` - -2. Inside the A-LEAF directory, run the A-LEAF environment setup script: - - `julia make_julia_environment.jl` - -3. Test the A-LEAF install by running the following within your A-LEAF directory: - - `julia execute_ALEAF.jl` - -4. Once the previous command runs to completion without failing, generate a precompiled Julia sysimage file with: +9. Once the previous command runs to completion without failing, generate a precompiled Julia sysimage file from within the `abce\env` directory. `julia make_sysimage.jl` -5. Set the `ALEAF_DIR` environment variable to the absolute path to your A-LEAF repo - ### Optional: installing with CPLEX 1. 
Download the [CPLEX (IBM ILOG STUDIO 20.10) binaries](https://www.ibm.com/docs/en/icos/20.1.0?topic=cplex-installing) @@ -226,14 +193,6 @@ ABCE also outputs plots of the evolution of the overall system portfolio and the ### Use ABCE with `watts` The[ Workflow and Template Toolkit for Simulations (`watts`)](https://github.com/watts-dev/watts) has a plugin for ABCE. Please see the `watts` documentation for usage. This workflow tool is useful for conducting sensitivity analyses and other experiments with ABCE. -## Testing - -### Julia Unit Tests - -Julia tests may be run with the following command from within the `test/` directory: - -`bash run_julia_tests.sh` - ## License Copyright 2023 Argonne National Laboratory diff --git a/inputs/ALEAF_settings.yml b/inputs/ALEAF_settings.yml deleted file mode 100644 index 3d338137..00000000 --- a/inputs/ALEAF_settings.yml +++ /dev/null @@ -1,282 +0,0 @@ -ALEAF_Master_LC_GEP: - LC_GEP_settings: - const_name_flag: "TRUE" - export_model_lp_expansion_flag: "FALSE" - export_model_lp_operation_flag: "FALSE" - model_lp_file_name_value: ALEAF_LC_GEP_model_instance.lp - planning_design: - current_year_value: 2022 - final_year_value: 2022 - numstages_value: 1 - loadincreaserate_base_value: 0 - WACC_value: 0.058 - planning_reserve_margin: 0.1375 - enforce_min_reserve_margin_flag: "FALSE" - num_days_per_OP_run_value: 1 - regulation_cost_scale_value: 0.2 - CTAX_in_MC_calculation_flag: "FALSE" - simulation_settings: - test_system_name: ERCOT - fuel_price_projection_flag: "FALSE" - capex_projection_flag: "FALSE" - generate_networkdata_flag: "TRUE" - PDTF_threshold_value: 0.01 - power_flow_mode_flag: "None" - bus_mapping_flag: "FALSE" - network_reduction_flag: "FALSE" - conduct_scenario_reduction_flag: "TRUE" - big_M_value: 1.00e-5 - run_expansion_flag: "TRUE" - run_operation_flag: "TRUE" - num_scenario_value: 1 - per_unit_base_value: 1 - generation_outage_flag: "FALSE" - scenario_settings: - scenario_name: ALEAF_ERCOT - ATB_Setting: ATB_ID_1 - 
NDAYS: 7 - NDAYS_OP: 365 - HFREQ: 1 - FIVEMIN: 0 - FIVEMIN_OP: 0 - peak_demand: 30000 - VOLL: 9000 - SRSP: 1100 - NSRSP: 100 - MAXENS: 0 - MAXENSI: 9 - SCFC: 10000 - MAXEMS: 9999 - carbon_tax: 0 - RPS_percentage: 0 - wind_PTC: 0 - solar_PTC: 0 - nuclear_PTC: 0 - wind_ITC: 0 - solar_ITC: 0 - nuclear_ITC: 0 - CAPPMT: 0 - ALEAF_relative_file_paths: - data_path: "data/LC_GEP" - output_file_path: "output/LC_GEP" - time_series_file_name: timeseries_hourly.csv - timeseries_data_load_path: "timeseries_data_files/Load/timeseries_load_hourly.csv" - timeseries_data_wind_path: "timeseries_data_files/WIND/timeseries_wind_hourly.csv" - timeseries_data_pv_path: "timeseries_data_files/PV/timeseries_pv_hourly.csv" - reserve_requirement_data_reg_path: "timeseries_data_files/Reserves/timeseries_reg_hourly.csv" - reserve_requirement_data_spin_path: "timeseries_data_files/Reserves/timeseries_spin_hourly.csv" - reserve_requirement_data_nspin_path: "timeseries_data_files/Reserves/timeseries_nspin_hourly.csv" - timeseries_data_load_5mins_path: "timeseries_data_files/Load/timeseries_load_5mins.csv" - timeseries_data_wind_5mins_path: "timeseries_data_files/WIND/timeseries_wind_5mins.csv" - timeseries_data_pv_5mins_path: "timeseries_data_files/PV/timeseries_pv_5mins.csv" - reserve_requirement_data_reg_5mins_path: "timeseries_data_files/Reserves/timeseries_reg_5mins.csv" - reserve_requirement_data_spin_5mins_path: "timeseries_data_files/Reserves/timeseries_spin_5mins.csv" - reserve_requirement_data_nspin_5mins_path: "timeseries_data_files/Reserves/timeseries_nspin_5mins.csv" - timeseries_data_gen_outages_path: "timeseries_data_files/Outage/timeseries_gen_outage_hourly.csv" - scenario_reduction_settings: - time_resolution: Hourly - fixing_extreme_days_flag: "TRUE" - generate_input_data_flag: "TRUE" - generate_duration_curve_flag: "FALSE" - input_type_load_shape_flag: "TRUE" - input_type_load_MWh_flag: "FALSE" - input_type_wind_shape_flag: "TRUE" - input_type_wind_MWh_flag: "FALSE" - 
input_type_solar_shape_flag: "TRUE" - input_type_solar_MWh_flag: "FALSE" - input_type_net_load_MWh_flag: "FALSE" -ALEAF_Master: - ALEAF_Master_setup: - model_type: LC_GEP - Master_data_file_name: ALEAF_Master_LC_GEP.xlsx - solver_name: CPLEX - CPLEX_settings: - CPX_PARAM_PARAMDISPLAY: 1 - CPX_PARAM_EPGAP: 0.0001 - CPX_PARAM_MIPDISPLAY: 2 - CPX_PARAM_SCRIND: 0 - CPXPARAM_TimeLimit: 6000 - CPX_PARAM_EPRHS: 1.00e-6 - CPX_PARAM_THREADS: 4 - CPX_PARAM_NUMERICALEMPHASIS: 0 - CPX_PARAM_PREIND: 1 - CPX_PARAM_SCAIND: 1 - CPXPARAM_Simplex_Limits_Singularity: 10 - CPXPARAM_MIP_Tolerances_Integrality: 1.00e-5 - CPX_PARAM_AGGIND: -1 - CPXPARAM_Simplex_Tolerances_Markowitz: 0.9 - CPX_PARAM_LPMETHOD: 6 - CPX_PARAM_MIPEMPHASIS: 0 - CPX_PARAM_QTOLININD: 1 - CPXPARAM_MIP_Strategy_VariableSelect: 4 - CPXPARAM_MIP_Strategy_StartAlgorithm: 4 - CPXPARAM_MIP_Strategy_NodeSelect: 1 - CPXPARAM_WorkMem: 2048 - CPXPARAM_MIP_Strategy_File: 1 - HiGHS_settings: - time_limit: 6000 - presolve: on - Gurobi_settings: - TimeLimit: 200 - MIPGap: 0.001 - OutputFlag: 1 - IntFeasTol: 1.00e-5 - FeasibilityTol: 1.00e-6 - DualReductions: 0 - Presolve: -1 - Aggregate: 1 - Threads: 4 - MIPFocus: 0 - NumericFocus: 0.0 - Cuts: 3.0 - GLPK_settings: - tm_lim: 60000 - msg_lev: 4 - CBC_settings: - seconds: 30 -ALEAF_portfolio: - grid_settings: - num_lines_value: 0 - num_zones_value: 1 - num_sub_area: 1 - buses: - bus_i: 1 - bus_name: bus_ABC - baseKV: 345 - bus type: ref - MW load: 0 - MVAR load: 0 - area: 1 - sub area: 1 - zone: 1 - lat: 0 - lng: 0 - EAS_market_zone: 1 - branch: - UID: A1 - f_bus: 1 - t_bus: 2 - br_r: 0.003 - br_x: 0.014 - br_b: 0.461 - rate_a: 175 - rate_b: 193 - rate_c: 200 - OutRate: 0.24 - Duration: 16 - Tr Ratio: 0 - OutRate: 0 - Length: 3 - model_flag: "TRUE" - sub_area: - sub_area_i: 1 - bus_name: sub_area_1 - baseKV: 345 - bus_type: ref - area: 1 - sub_area: 11 - zone: 11 - EAS_market_zone: 1 - sub_area_mapping: - bus_i: 1 - sub_area: 1 - system_portfolio: - wind: - "GEN UID": 1_wind_1 - 
bus_i: 1 - Tech_ID: Tech1 - "GenCo ID": 1 - UNITGROUP: WIND - UNIT_CATEGORY: Wind - UNIT_TYPE: Wind - FUEL: Wind - EXUNITS: 279 - CAP: 100 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - solar: - "GEN UID": 1_solar_1 - bus_i: 1 - Tech_ID: Tech2 - "GenCo ID": 1 - UNITGROUP: PV - UNIT_CATEGORY: "Solar PV" - UNIT_TYPE: Solar - FUEL: Solar - EXUNITS: 83 - CAP: 100 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - coal: - "GEN UID": 1_coal_1 - bus_i: 1 - Tech_ID: Tech3 - "GenCo ID": 1 - UNITGROUP: COAL - UNIT_CATEGORY: Coal - UNIT_TYPE: Coal - FUEL: Coal - EXUNITS: 31 - CAP: 500 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - ngcc: - "GEN UID": 1_ngcc_1 - bus_i: 1 - Tech_ID: Tech4 - "GenCo ID": 1 - UNITGROUP: NGCC - UNIT_CATEGORY: NGCC - UNIT_TYPE: NGCC - FUEL: Gas - EXUNITS: 226 - CAP: 200 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - ngct: - "GEN UID": 1_ngct_1 - bus_i: 1 - Tech_ID: Tech5 - "GenCo ID": 1 - UNITGROUP: NGCT - UNIT_CATEGORY: NGCT - UNIT_TYPE: NGCT - FUEL: NGCT - EXUNITS: 175 - CAP: 50 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - conventional_nuclear: - "GEN UID": 1_nuclear_C - bus_i: 1 - Tech_ID: Tech6 - "GenCo ID": 1 - UNITGROUP: ConventionalNuclear - UNIT_CATEGORY: ConventionalNuclear - UNIT_TYPE: ConventionalNuclear - FUEL: Nuclear - EXUNITS: 13 - CAP: 400 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - advanced_nuclear: - "GEN UID": 1_nuclear_A - bus_i: 1 - Tech_ID: Tech7 - "GenCo ID": 1 - UNITGROUP: AdvancedNuclear - UNIT_CATEGORY: AdvancedNuclear - UNIT_TYPE: AdvancedNuclear - FUEL: Nuclear - EXUNITS: 0 - CAP: 300 - MAXINVEST: 0 - MININVEST: 0 - MINRET: 0 - diff --git a/install.sh b/install.sh index 2691e21b..0d097c02 100755 --- a/install.sh +++ b/install.sh @@ -30,13 +30,11 @@ JULIA_MAKE_FILE="env/make_julia_environment.jl" JULIA_URL="https://julialang-s3.julialang.org/bin/linux/x64/1.8/julia-1.8.2-linux-x86_64.tar.gz" # Check for command-line arguments -# -a: pre-specify the ALEAF_DIR absolute path as a command-line argument # -n: ignore conda for package management 
(1=force ignore conda) # -f: auto-agree to any yes/no user prompts while getopts a:nf flag do case "${flag}" in - a) aleaf_dir=${OPTARG};; n) no_conda=1;; f) force=1;; esac @@ -52,18 +50,6 @@ echo "\$ABCE_DIR will be set to $abce_dir" export ABCE_DIR=$abce_dir export ABCE_ENV="$ABCE_DIR/env" -# Set up ALEAF_DIR -# If specified by a command-line argument, don't prompt the user for the -# ALEAF_DIR location. The empty string "" is a valid command-line value -# for this directory. -if [[ ! -n ${aleaf_dir+x} ]]; then - # If not specified in the command line, request a value from the user - echo "Please enter the absolute path to the top level of the ALEAF source directory." - echo "If you do not have ALEAF installed, press Enter to leave this variable empty." - read aleaf_dir -fi -echo "\$ALEAF_DIR will be set to $aleaf_dir" - ################################################################# # Set up the environment @@ -201,7 +187,6 @@ echo "# ABCE configuration" >> "${RC_FILE}" echo "# Delete this block to remove undesired side effects (e.g. Julia version update)" >> "${RC_FILE}" echo "#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - " >> "${RC_FILE}" echo "export ABCE_DIR=$abce_dir" >> "${RC_FILE}" -echo "export ALEAF_DIR=$aleaf_dir" >> "${RC_FILE}" echo "export ABCE_ENV=$abce_dir/env" >> "${RC_FILE}" # Ensure that julia-1.8.2 is added to the $PATH such that the `julia` command @@ -234,7 +219,7 @@ if [[ -z $( which cplex ) ]]; then echo "Either CPLEX is not installed, or you haven't added the location of the 'cplex' binary to the path." echo "If you can't install CPLEX, be sure to change the 'solver' setting in settings.yml to a different solver (recommended alternative: 'HiGHS')." elif [[ -z $( echo $( which cplex) | grep -E "201" ) ]]; then - echo "ABCE and A-LEAF require CPLEX 20.1, but it appears that you have a different version installed." + echo "ABCE requires CPLEX 20.1, but it appears that you have a different version installed." 
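The `run.py` hunk later in this diff deletes the guarded lookup of the `ALEAF_DIR` environment variable (read the variable, log a helpful message, re-raise on failure). That pattern remains relevant for `ABCE_DIR`, which the install script still exports. A minimal sketch of the pattern, with an illustrative helper name and stand-in value that are not part of ABCE's actual code:

```python
import logging
import os
from pathlib import Path

def require_env_path(var_name):
    # Illustrative helper (not ABCE's code): resolve a required environment
    # variable to a Path, logging a clear error before re-raising if unset.
    try:
        return Path(os.environ[var_name])
    except KeyError:
        logging.error(
            "The environment variable %s does not appear to be set. "
            "Please make sure it points to the correct directory.",
            var_name,
        )
        raise

os.environ["ABCE_DIR"] = "/home/user/abce"   # stand-in value for the demo
print(require_env_path("ABCE_DIR"))
```

Re-raising after logging preserves the traceback while still giving the user an actionable message, which is the behavior the removed `run.py` block implemented.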
echo "If you can't install CPLEX 20.1, be sure to change the 'solver' setting in settings.yml to a different solver (recommended alternative: 'HiGHS')." else echo "CPLEX 20.1 found on the path." diff --git a/run.py b/run.py index bb7e0a5b..a07a3063 100644 --- a/run.py +++ b/run.py @@ -43,19 +43,6 @@ def set_up_local_paths(settings): # Set the path for ABCE files to the directory where run.py is saved settings["file_paths"]["ABCE_abs_path"] = Path(__file__).parent - if settings["simulation"]["annual_dispatch_engine"] == "ALEAF": - # Try to locate an environment variable to specify where A-LEAF is located - try: - settings["ALEAF"]["ALEAF_abs_path"] = Path(os.environ["ALEAF_DIR"]) - except KeyError: - msg = ( - "The environment variable ALEAF_abs_path does not appear " - + "to be set. Please make sure it points to the correct " - + "directory." - ) - logging.error(msg) - raise - return settings diff --git a/settings.yml b/settings.yml index 9582892a..b0be5c7c 100644 --- a/settings.yml +++ b/settings.yml @@ -4,7 +4,6 @@ simulation: num_steps: 1 file_logging_level: 0 # 0: no additional csvs saved; 1: all additional csvs saved # caution! 
enabling file logging with 365 repdays can require 200GB+ of storage - annual_dispatch_engine: ABCE C2N_assumption: baseline scenario: @@ -145,12 +144,3 @@ financing: depreciation_horizon: 20 starting_instrument_id: 1000 -# Filenames and settings for ALEAF -ALEAF: - ALEAF_master_settings_file: "ALEAF_Master.xlsx" - ALEAF_model_type: "LC_GEP" - ALEAF_region: "ERCOT" - ALEAF_model_settings_file: "ALEAF_Master_LC_GEP.xlsx" - ALEAF_portfolio_file: "ALEAF_ERCOT.xlsx" - ALEAF_data_file: "ALEAF_settings.yml" - diff --git a/src/ABCEfunctions.jl b/src/ABCEfunctions.jl index 232050b0..c5ae8efc 100644 --- a/src/ABCEfunctions.jl +++ b/src/ABCEfunctions.jl @@ -99,22 +99,8 @@ end function set_up_local_paths(settings, abce_abs_path) settings["file_paths"]["ABCE_abs_path"] = abce_abs_path - if settings["simulation"]["annual_dispatch_engine"] == "ALEAF" - try - settings["file_paths"]["ALEAF_abs_path"] = ENV["ALEAF_DIR"] - catch LoadError - @error string( - "The environment variable ALEAF_abs_path does not ", - "appear to be set. Please make sure it points to ", - "the correct directory.", - ) - end - else - settings["file_paths"]["ALEAF_abs_path"] = "NULL_PATH" - end return settings - end diff --git a/src/ABCEfunctions.py b/src/ABCEfunctions.py index 13cef743..9a055c8e 100644 --- a/src/ABCEfunctions.py +++ b/src/ABCEfunctions.py @@ -17,11 +17,7 @@ import sqlite3 import os -import sys -import numpy as np import pandas as pd -import matplotlib.pyplot as plt -import glob from . import scenario_reduction as sr @@ -154,327 +150,3 @@ def update_DB_table_inplace(db, cur, table, new_data, where): # Execute the constructed command cur.execute(update_cmd) - -def process_outputs(settings, output_dir, unit_specs): - """ - A handler function for postprocessing A-LEAF results stored from the - individual time-steps of an ABCE simulation run. - - Postprocessing tasks: - - "Capacity expansion" (i.e. portfolio composition) (in # of units - and MW installed) - - System-wide results, e.g. 
total system costs and weighted-average - prices - - Generation unit type results: amount of generation and AS, and - profitability by unit type - - Price data: sorted by time-stamp and from high to low - """ - - # Postprocessing settings - scenario_name = settings["simulation"]["scenario_name"] - file_types = [ - "dispatch_summary_OP", - "expansion_result", - "system_summary_OP", - "system_tech_summary_OP", - ] - - # Get a list of each type of file from the ABCE A-LEAF outputs dir - # This will allow postprocessing to work even if some files are missing - file_lists = {} - for ftype in file_types: - file_lists[ftype] = glob.glob(os.path.join(output_dir, f"*{ftype}*")) - - # Postprocess the expansion results - expansion_results = process_expansion_results( - file_lists["expansion_result"], output_dir, scenario_name, unit_specs - ) - - # Postprocess the system-level results - system_summary_results = process_system_summary( - file_lists["system_summary_OP"], output_dir, scenario_name - ) - - # Postprocess the generation unit type results - system_tech_results = process_tech_summary( - file_lists["system_tech_summary_OP"], - output_dir, - scenario_name, - unit_specs, - ) - - # Postprocess the electricity price data - unsorted_lmp_data, sorted_lmp_data = process_dispatch_data( - file_lists["dispatch_summary_OP"], output_dir, scenario_name - ) - - # Write results to xlsx - writer = pd.ExcelWriter( - Path( - self.settings["file_paths"]["ABCE_abs_path"] - / "outputs" - / self.settings["simulation"]["scenario_name"] - / "abce_ppx_outputs.xlsx" - ) - ) - expansion_results.to_excel(writer, sheet_name="exp_results", index=False) - system_summary_results.to_excel( - writer, sheet_name="sys_summary_results", index=False - ) - system_tech_results["gen"].to_excel( - writer, sheet_name="tech_generation", index=False - ) - system_tech_results["rr"].to_excel( - writer, sheet_name="tech_reg", index=False - ) - system_tech_results["sr"].to_excel( - writer, sheet_name="tech_spin", 
index=False - ) - system_tech_results["nsr"].to_excel( - writer, sheet_name="tech_nonspin", index=False - ) - system_tech_results["rev"].to_excel( - writer, sheet_name="tech_revenue", index=False - ) - system_tech_results["prof"].to_excel( - writer, sheet_name="tech_profit", index=False - ) - unsorted_lmp_data.to_excel(writer, sheet_name="unsorted_LMP", index=False) - sorted_lmp_data.to_excel(writer, sheet_name="sorted_LMP", index=False) - writer.save() - - # Plot PDCs - plot_pdcs(sorted_lmp_data) - - -def process_expansion_results( - exp_file_list, output_dir, scenario_name, unit_specs -): - """ - Process the "capacity expansion" results output by A-LEAF. For ABCE, - A-LEAF should never be able to actually build or retire new units, so - this provides both an easy reminder of the portfolio composition but - also a diagnostic in case of an incorrect setting that allows A-LEAF - to build or retire units. - """ - num_files = len(exp_file_list) - for i in range(num_files): - file_name = os.path.join( - output_dir, f"{scenario_name}__expansion_result__step_{i}.csv" - ) - df = pd.read_csv(file_name) - - # Use the first file as a seed - if i == 0: - exp_df = df - exp_df = exp_df.rename(columns={"u_i": "step_0"}) - # Fill in only the # of units column from subsequent files - else: - exp_df[f"step_{i}"] = df["u_i"] - - # Delete unneeded columns and give unit id more helpful names - exp_df = exp_df.drop(["u_new_i", "u_ret_i"], axis=1) - - # Warning: this currently assumes the units in unit_specs are listed in the - # same order as in the A-LEAF outputs - # unit_specs should inherit its ordering from the master A-LEAF unit specs - # file, but this may be fragile. - exp_df["unit_id"] = unit_specs["unit_type"] - exp_df = exp_df.rename(columns={"unit_id": "unit_type"}) - - return exp_df - - -def process_system_summary(ss_file_list, output_dir, scenario_name): - """ - Process and collect all system-summary data outputs from A-LEAF. 
- - The average LMP and RMP data in the A-LEAF output files are computed by - straight average, whereas I need a weighted average. This function also - loads in the dispatch data to quickly compute the correct weighted- - average prices for each time step. - """ - num_files = len(ss_file_list) - for i in range(num_files): - file_name = os.path.join( - output_dir, f"{scenario_name}__system_summary_OP__step_{i}.csv" - ) - df = pd.read_csv(file_name) - # For this workflow, df should only have one line. Alert the user if - # otherwise - if len(df) != 1: - logging.warn( - "It looks like you've run multiple ALEAF scenarios ", - "in the same run. Postprocessing outputs may be ", - "incorrectly translated.", - ) - - # Give df a more helpful first-column key - df = df.rename(columns={"Scenario": "step"}) - df.loc[0, "step"] = i - - # Correctly calculate weighted average electricity and AS prices, and - # add them to the appropriate columns - ds_file_name = os.path.join( - output_dir, f"{scenario_name}__dispatch_summary_OP__step_{i}.csv" - ) - dsdf = pd.read_csv(ds_file_name) - # Set up weighted price columns - dsdf["wtd_LMP"] = dsdf["g_idht"] * dsdf["LMP_dht"] - dsdf["wtd_RMP_R"] = dsdf["r_r_idht"] * dsdf["RMP_R_dht"] - dsdf["wtd_RMP_S"] = dsdf["r_s_idht"] * dsdf["RMP_S_dht"] - dsdf["wtd_RMP_NS"] = dsdf["r_ns_idht"] * dsdf["RMP_NS_dht"] - # Compute weighted average prices - elec_wap = dsdf["wtd_LMP"].sum() / dsdf["g_idht"].sum() - r_wap = dsdf["wtd_RMP_R"].sum() / dsdf["r_r_idht"].sum() - s_wap = dsdf["wtd_RMP_S"].sum() / dsdf["r_s_idht"].sum() - ns_wap = dsdf["wtd_RMP_NS"].sum() / dsdf["r_ns_idht"].sum() - # Insert corrected values into df - df.loc[0, "LMP"] = elec_wap - df.loc[0, "RMP_R"] = r_wap - df.loc[0, "RMP_S"] = s_wap - df.loc[0, "RMP_NS"] = ns_wap - - # If this is step 0, use the dataframe as a seed - if i == 0: - ss_df = df - # Append each file's data to the main df - else: - ss_df = pd.concat([ss_df, df]) - - return ss_df - - -def 
process_tech_summary(sts_file_list, output_dir, scenario_name, unit_specs): - """ - Process all sets of A-LEAF unit-type summary statistics. Cleans up unit - type names and then organizes results into a unified dataframe. - """ - num_files = len(sts_file_list) - for i in range(num_files): - # Read in the system technology summary for time-step i - file_name = os.path.join( - output_dir, f"{scenario_name}__system_tech_summary_OP__step_{i}.csv" - ) - df = pd.read_csv(file_name) - - # Clean up unit type representation - df = df.rename(columns={"UnitGroup": "unit_type"}) - - # Replace default numbers with unit names - # Same fragility warning as process_expansion_results() - df["unit_type"] = unit_specs["unit_type"] - - # If this is step 0, use the file to set up DataFrames for all tracked - # items - if i == 0: - gen_df = ( - df[["unit_type", "Generation"]] - .copy() - .rename(columns={"Generation": "step_0"}) - ) - rr_df = ( - df[["unit_type", "Reserve_Reg"]] - .copy() - .rename(columns={"Reserve_Reg": "step_0"}) - ) - sr_df = ( - df[["unit_type", "Reserve_Spin"]] - .copy() - .rename(columns={"Reserve_Spin": "step_0"}) - ) - nsr_df = ( - df[["unit_type", "Reserve_Non"]] - .copy() - .rename(columns={"Reserve_Non": "step_0"}) - ) - rev_df = ( - df[["unit_type", "UnitRevenue"]] - .copy() - .rename(columns={"UnitRevenue": "step_0"}) - ) - prof_df = ( - df[["unit_type", "UnitProfit"]] - .copy() - .rename(columns={"UnitProfit": "step_0"}) - ) - else: - gen_df[f"step_{i}"] = df["Generation"].copy() - rr_df[f"step_{i}"] = df["Reserve_Reg"].copy() - sr_df[f"step_{i}"] = df["Reserve_Spin"].copy() - nsr_df[f"step_{i}"] = df["Reserve_Non"].copy() - rev_df[f"step_{i}"] = df["UnitRevenue"].copy() - prof_df[f"step_{i}"] = df["UnitProfit"].copy() - - results = { - "gen": gen_df, - "rr": rr_df, - "sr": sr_df, - "nsr": nsr_df, - "rev": rev_df, - "prof": prof_df, - } - return results - - -def process_dispatch_data(ds_file_list, output_dir, scenario_name): - """ - Process all sets of 
A-LEAF dispatch output data, in order to create - dataframes of electricity price (LMP) data. Function filters the raw data - to remove redundant LMP entries, and produces time-sorted and - high-to-low sorted dataframes of results for all time steps. - - Returns: - - unsorted_LMP_data: LMP data for each time step, sorted by timestamp - - sorted_LMP_data: LMP data for each time step, sorted high to low - (to create price duration curves) - """ - - num_files = len(ds_file_list) - for i in range(num_files): - # Read in the ALEAF dispatch output data for time-step i - file_name = os.path.join( - output_dir, f"{scenario_name}__dispatch_summary_OP__step_{i}.csv" - ) - df = pd.read_csv(file_name) - - # Filter out duplicate price entries: the dispatch data df has one - # copy of each time period's price data for each unit type. I'm - # filtering for unique time periods by only selecting data for the - # "WIND" generator (though any generator type would work the same). - df = df[df["UnitGroup"] == "WIND"].reset_index() - - # Create dataframes to hold unsorted and sorted LMP data separately - unsorted_lmp = pd.DataFrame(df["LMP_dht"].copy()) - sorted_lmp = unsorted_lmp.copy().sort_values( - by="LMP_dht", ascending=False, ignore_index=True - ) - - # Name the new column in each df appropriately - if i == 0: - unsorted_lmp_data = unsorted_lmp.rename( - columns={"LMP_dht": "step_0"} - ) - sorted_lmp_data = sorted_lmp.rename(columns={"LMP_dht": "step_0"}) - else: - unsorted_lmp_data[f"step_{i}"] = unsorted_lmp - sorted_lmp_data[f"step_{i}"] = sorted_lmp - - return unsorted_lmp_data, sorted_lmp_data - - -def plot_pdcs(sorted_lmp_data): - """ - A function to plot price duration curves. Currently only plots - step 0 results. 
- """ - # Create log-log plots of the period-0 A-LEAF dispatch results - x = np.arange(len(sorted_lmp_data)) + 1 - fig, ax = plt.subplots() - ax.set_title("Log-log plot of period-0 price duration curve") - ax.set_xlabel("log(period of the year)") - ax.set_ylabel("log(price)") - ax.set_xscale("log") - ax.set_yscale("log") - ax.plot(x, sorted_lmp_data["step_0"]) - fig.savefig("./abce_pd0_pdc.png") diff --git a/src/input_data_management.py b/src/input_data_management.py index 5bce34b1..407be55a 100644 --- a/src/input_data_management.py +++ b/src/input_data_management.py @@ -14,12 +14,8 @@ # limitations under the License. ########################################################################## - -import os import pandas as pd -import numpy as np import yaml -import openpyxl from pathlib import Path @@ -32,536 +28,6 @@ def load_data(file_name): return file_contents -def process_system_portfolio(db, current_pd): - # Retrieve the list of currently-operational assets by type - system_portfolio = pd.read_sql_query( - f"SELECT unit_type, COUNT(unit_type) FROM assets " - + f"WHERE completion_pd <= {current_pd} AND " - + f"retirement_pd > {current_pd} " - + f"GROUP BY unit_type", - db, - ) - - # Rename the aggregation column to match the standard - system_portfolio = system_portfolio.rename( - columns={"COUNT(unit_type)": "num_units"} - ) - - return system_portfolio - - -def update_unit_specs_for_ALEAF(unit_specs_data, system_portfolio): - # Separate out ATB_search_settings - for unit_type, unit_type_data in unit_specs_data.items(): - if "ATB_search_settings" in unit_type_data.keys(): - for ATB_key, ATB_value in unit_type_data[ - "ATB_search_settings" - ].items(): - unit_type_data[f"ATB_{ATB_key}"] = ATB_value - - # Convert unit_specs data from dict to DataFrame - unit_specs = ( - pd.DataFrame.from_dict(unit_specs_data, orient="index") - .reset_index() - .rename(columns={"index": "unit_type"}) - ) - - # Sort unit_specs alphabetically to allow definitive TechX id ordering - 
unit_specs = unit_specs.sort_values("unit_type") - - # Create the UNIT_x columns - unit_specs["UNITGROUP"] = unit_specs["unit_type"] - unit_specs["UNIT_CATEGORY"] = unit_specs["unit_type"] - - # Create constant-value columns - unit_specs["FOR"] = 0 - unit_specs["FCR"] = 0.05 - unit_specs["Charge_CAP"] = 0 - unit_specs["STOCAP"] = 0 - unit_specs["STOMIN"] = 0 - unit_specs["INVEST_FLAG"] = "FALSE" - unit_specs["RET_FLAG"] = "FALSE" - unit_specs["BATEFF"] = 0 - unit_specs["Integrality"] = "TRUE" - unit_specs["Outages"] = "FALSE" - unit_specs["bus_i"] = 1 - unit_specs["GenCo ID"] = 1 - unit_specs["MAXINVEST"] = 0 - unit_specs["MININVEST"] = 0 - unit_specs["MINRET"] = 0 - unit_specs["ATB_Setting_ID"] = "ATB_ID_1" - - # Create the computed Emission column - unit_specs["Emission"] = ( - unit_specs["emissions_per_MMBTU"] * unit_specs["heat_rate"] / 1000 - ) - - # Take care of the row-by-row operations - unit_specs["Tech_ID"] = "" - unit_specs["Commitment"] = "" - unit_specs["GEN UID"] = "" - tech_id = 1 - unit_specs = unit_specs.set_index("unit_type") - for unit_type in unit_specs.itertuples(): - # Set Commitment to the inverse of is_VRE - unit_specs.loc[unit_type.Index, "Commitment"] = not unit_type.is_VRE - - # Set the Tech_ID value in alphabetical order - unit_specs.loc[unit_type.Index, "Tech_ID"] = f"Tech{tech_id}" - tech_id += 1 - - # Set the GEN UID field - unit_specs.loc[unit_type.Index, "GEN UID"] = f"{unit_type.Index}_1" - - # Pivot in num_units data from the system portfolio for convenience - unit_specs = unit_specs.reset_index().rename(columns={"index": "unit_type"}) - unit_specs = unit_specs.merge(system_portfolio, how="outer", on="unit_type") - - # Fill any NaNs with zeros - unit_specs = unit_specs.fillna(0) - - return unit_specs - - -def create_ALEAF_unit_dataframes(unit_specs): - gen_technology_cols = { - "Tech_ID": "Tech_ID", - "UNITGROUP": "UNITGROUP", - "UNIT_CATEGORY": "UNIT_CATEGORY", - "unit_type": "UNIT_TYPE", - "fuel_type": "FUEL", - "capacity": 
"CAP", - "max_PL": "PMAX", - "min_PL": "PMIN", - "FOR": "FOR", - "overnight_capital_cost": "CAPEX", - "FCR": "FCR", - "retirement_cost": "RETC", - "FOM": "FOM", - "VOM": "VOM", - "FC_per_MMBTU": "FC", - "no_load_cost": "NLC", - "start_up_cost": "SUC", - "shut_down_cost": "SDC", - "heat_rate": "HR", - "ramp_up_limit": "RUL", - "ramp_down_limit": "RDL", - "max_regulation": "MAXR", - "max_spinning_reserve": "MAXSR", - "max_nonspinning_reserve": "MAXNSR", - "capacity_factor": "CAPCRED", - "emissions_per_MMBTU": "EMSFAC", - "Charge_CAP": "Charge_CAP", - "STOCAP": "STOCAP", - "STOMIN": "STOMIN", - "INVEST_FLAG": "INVEST_FLAG", - "RET_FLAG": "RET_FLAG", - "is_VRE": "VRE_Flag", - "BATEFF": "BATEFF", - "Commitment": "Commitment", - "Integrality": "Integrality", - "Emission": "Emission", - "Outages": "Outages", - } - - gen_technology = ( - unit_specs[list(gen_technology_cols.keys())] - .copy() - .rename(columns=gen_technology_cols) - ) - - gen_cols = { - "GEN UID": "GEN UID", - "bus_i": "bus_i", - "Tech_ID": "Tech_ID", - "UNITGROUP": "UNITGROUP", - "UNIT_CATEGORY": "UNIT_CATEGORY", - "unit_type": "UNIT_TYPE", - "fuel_type": "FUEL", - "num_units": "EXUNITS", - "capacity": "CAP", - "MAXINVEST": "MAXINVEST", - "MININVEST": "MININVEST", - "MINRET": "MINRET", - } - - gen = unit_specs[list(gen_cols.keys())].copy().rename(columns=gen_cols) - - ATB_settings_cols = { - "ATB_Setting_ID": "ATB_Setting_ID", - "Tech_ID": "Tech_ID", - "UNITGROUP": "UNITGROUP", - "UNIT_CATEGORY": "UNIT_CATEGORY", - "unit_type": "UNIT_TYPE", - "fuel_type": "FUEL", - "ATB_Tech": "Tech", - "ATB_TechDetail": "TechDetail", - "ATB_Case": "Case", - "ATB_CRP": "CRP", - "ATB_Scenario": "Scenario", - "ATB_Year": "Year", - "ATB_ATB_year": "ATB Year", - } - - ATB_settings = ( - unit_specs[list(ATB_settings_cols.keys())] - .copy() - .rename(columns=ATB_settings_cols) - ) - - return gen_technology, gen, ATB_settings - - -def create_ALEAF_Master_file(ALEAF_data, settings): - # Dictionary of all tabs and their metadata - 
# In final implementation, will be replaced by the ALEAF standard - # data schema - tabs_to_create = { - "ALEAF Master Setup": { - "ABCE_tab_name": "ALEAF_Master_setup", - "data": None, - }, - "CPLEX Setting": {"ABCE_tab_name": "CPLEX_settings", "data": None}, - "GLPK Setting": {"ABCE_tab_name": "GLPK_settings", "data": None}, - "CBC Setting": {"ABCE_tab_name": "CBC_settings", "data": None}, - "Gurobi Setting": {"ABCE_tab_name": "Gurobi_settings", "data": None}, - "HiGHS Setting": {"ABCE_tab_name": "HiGHS_settings", "data": None}, - } - - # Pull the ALEAF settings data into dictionaries - for ALEAF_tab_name, tab_data in tabs_to_create.items(): - tabs_to_create[ALEAF_tab_name]["data"] = ALEAF_data["ALEAF_Master"][ - tab_data["ABCE_tab_name"] - ] - - # Pull the appropriate solver name from the settings file - tabs_to_create["ALEAF Master Setup"]["solver_name"] = settings[ - "simulation" - ]["solver"] - - # Finalize the Setting tab data - for solver_tab, tab_data in tabs_to_create.items(): - if solver_tab != "ALEAF Master Setup": - # Set up metadata about solver settings - invalid_items = [ - "solver_setting_list", - "num_solver_setting", - "solver_direct_mode_flag", - ] - solver_setting_list = ", ".join( - [ - parameter - for parameter in tabs_to_create[solver_tab]["data"].keys() - if parameter not in invalid_items - ] - ) - - # Set up solver_direct_mode_flag: TRUE if CPLEX, FALSE otherwise - mode_flag = "false" - if "CPLEX" in solver_tab: - mode_flag = "true" - - # Create the dictionary of extra rows for all solver tabs - solver_extra_items = { - "solver_direct_mode_flag": mode_flag, - "num_solver_setting": len(tabs_to_create[solver_tab]["data"]), - "solver_setting_list": solver_setting_list, - } - - for key, value in solver_extra_items.items(): - if key not in tab_data["data"].keys(): - tab_data["data"].update({key: value}) - - # Construct the path to which this file should be written - output_path = ( - Path(os.environ["ALEAF_DIR"]) - / "setting" - / 
settings["ALEAF"]["ALEAF_master_settings_file"] - ) - - # Write this file to the destination - write_workbook_and_close("ALEAF_Master", tabs_to_create, output_path) - - -def create_ALEAF_Master_LC_GEP_file( - ALEAF_data, gen_technology, ATB_settings, settings -): - tabs_to_create = { - "LC_GEP Setting": {"ABCE_tab_name": "LC_GEP_settings", "data": None}, - "Planning Design": {"ABCE_tab_name": "planning_design", "data": None}, - "Simulation Setting": { - "ABCE_tab_name": "simulation_settings", - "data": None, - }, - "Simulation Configuration": { - "ABCE_tab_name": "scenario_settings", - "data": None, - "orient": "horizontal", - }, - "Gen Technology": { - "ABCE_tab_name": "unit_specs", - "data": gen_technology, - }, - "File Path": { - "ABCE_tab_name": "ALEAF_relative_file_paths", - "data": None, - }, - "Scenario Reduction Setting": { - "ABCE_tab_name": "scenario_reduction_settings", - "data": None, - }, - "ATB Setting": { - "ABCE_tab_name": "ATB_search_settings", - "data": ATB_settings, - }, - } - - for ALEAF_tab_name, tab_data in tabs_to_create.items(): - if not isinstance(tab_data["data"], pd.DataFrame): - tabs_to_create[ALEAF_tab_name]["data"] = ALEAF_data[ - "ALEAF_Master_LC_GEP" - ][tab_data["ABCE_tab_name"]] - - # Finalize tab data - # Finalize "Planning Design" tab - # Add extra items - pd_data = tabs_to_create["Planning Design"]["data"] - pd_extra_items = { - "targetyear_value": ( - pd_data["final_year_value"] - pd_data["current_year_value"] - ), - "load_increase_rate_value": 1, - "num_simulation_per_stage_value": ( - (pd_data["final_year_value"] - pd_data["current_year_value"]) - / pd_data["numstages_value"] - ), - } - tabs_to_create["Planning Design"]["data"].update(pd_extra_items) - - # Rename items - items_to_rename = { - "planning_reserve_margin": "planning_reserve_margin_value" - } - for key, value in items_to_rename.items(): - if value not in tabs_to_create["Planning Design"]["data"].keys(): - tabs_to_create["Planning Design"]["data"][value] = 
tabs_to_create[ - "Planning Design" - ]["data"].pop(key) - - # Finalize "Simulation Setting" tab - # Add extra items - ss_extra_items = { - "test_system_file_name": f"ALEAF_{tabs_to_create['Simulation Setting']['data']['test_system_name']}" - } - tabs_to_create["Simulation Setting"]["data"].update(ss_extra_items) - - # Rename items - items_to_rename = { - "capex_projection_flag": "capax_projection_flag", - "network_reduction_flag": "Network_reduction_flag", - } - for key, value in items_to_rename.items(): - if value not in tabs_to_create["Simulation Setting"]["data"].keys(): - tabs_to_create["Simulation Setting"]["data"][ - value - ] = tabs_to_create["Simulation Setting"]["data"].pop(key) - - # Finalize "Simulation Configuration" tab - # Update existing data items - # Update peak demand - tabs_to_create["Simulation Configuration"]["data"][ - "peak_demand" - ] = settings["scenario"]["peak_demand"] - - # Update policies - if "policies" in settings["scenario"].keys(): - for policy, policy_data in settings["scenario"]["policies"].items(): - if policy_data["enabled"]: - qty = policy_data["qty"] - if ( - "PTC" in policy - and "unit_type" in policy_data["eligibility"].keys() - ): - eligible_types = policy_data["eligibility"]["unit_type"] - if any("wind" in unit_type for unit_type in eligible_types): - tabs_to_create["Simulation Configuration"]["data"][ - "wind_PTC" - ] = qty - if any( - "solar" in unit_type for unit_type in eligible_types - ): - tabs_to_create["Simulation Configuration"]["data"][ - "solar_PTC" - ] = qty - if any( - "nuclear" in unit_type for unit_type in eligible_types - ): - tabs_to_create["Simulation Configuration"]["data"][ - "nuclear_PTC" - ] = qty - if ( - "ITC" in policy - and "unit_type" in policy_data["eligibility"].keys() - ): - eligible_types = policy_data["eligibility"]["unit_type"] - if any("wind" in unit_type for unit_type in eligible_types): - tabs_to_create["Simulation Configuration"]["data"][ - "wind_ITC" - ] = qty - if any( - "solar" in 
unit_type for unit_type in eligible_types - ): - tabs_to_create["Simulation Configuration"]["data"][ - "solar_ITC" - ] = qty - if any( - "nuclear" in unit_type for unit_type in eligible_types - ): - tabs_to_create["Simulation Configuration"]["data"][ - "nuclear_ITC" - ] = qty - if "CTAX" in policy: - tabs_to_create["Simulation Configuration"]["data"][ - "carbon_tax" - ] = qty - - # Add extra items - sc_extra_items = { - "planning_reserve_margin_value": tabs_to_create["Planning Design"][ - "data" - ]["planning_reserve_margin_value"], - "load_increase_rate_value": tabs_to_create["Planning Design"]["data"][ - "load_increase_rate_value" - ], - } - tabs_to_create["Simulation Configuration"]["data"].update(sc_extra_items) - - # Rename items - items_to_rename = { - "scenario_name": "Scenario", - "peak_demand": "PD", - "carbon_tax": "CTAX", - "RPS_percentage": "RPS", - "wind_PTC": "PTC_W", - "solar_PTC": "PTC_S", - "nuclear_PTC": "PTC_N", - "wind_ITC": "ITC_W", - "solar_ITC": "ITC_S", - "nuclear_ITC": "ITC_N", - } - for key, value in items_to_rename.items(): - if ( - value - not in tabs_to_create["Simulation Configuration"]["data"].keys() - ): - tabs_to_create["Simulation Configuration"]["data"][ - value - ] = tabs_to_create["Simulation Configuration"]["data"].pop(key) - - # Finalize "Scenario Reduction Setting" tab - srs_data = tabs_to_create["Scenario Reduction Setting"]["data"] - data_sets = [ - srs_data["input_type_load_shape_flag"], - srs_data["input_type_load_MWh_flag"], - srs_data["input_type_wind_shape_flag"], - srs_data["input_type_wind_MWh_flag"], - srs_data["input_type_solar_shape_flag"], - srs_data["input_type_solar_MWh_flag"], - srs_data["input_type_net_load_MWh_flag"], - ] - num_data_sets = sum(1 for item in data_sets if item == "TRUE") - srs_extra_items = {"num_data_set": num_data_sets} - tabs_to_create["Scenario Reduction Setting"]["data"].update(srs_extra_items) - - # Construct the path to which this file should be written - output_path = ( - 
Path(os.environ["ALEAF_DIR"]) - / "setting" - / settings["ALEAF"]["ALEAF_model_settings_file"] - ) - - # Write this file to the destination - write_workbook_and_close("ALEAF_Master_LC_GEP", tabs_to_create, output_path) - - -def create_ALEAF_portfolio_file(ALEAF_data, gen, settings): - tabs_to_create = { - "case setting": {"ABCE_tab_name": "grid_settings", "data": None}, - "gen": {"ABCE_tab_name": "system_portfolio", "data": gen}, - "bus": {"ABCE_tab_name": "buses", "data": None, "orient": "horizontal"}, - "branch": { - "ABCE_tab_name": "branch", - "data": None, - "orient": "horizontal", - }, - "sub_area": { - "ABCE_tab_name": "sub_area", - "data": None, - "orient": "horizontal", - }, - "sub_area_mapping": { - "ABCE_tab_name": "sub_area_mapping", - "data": None, - "orient": "horizontal", - }, - } - - for ALEAF_tab_name, tab_data in tabs_to_create.items(): - if not isinstance(tab_data["data"], pd.DataFrame): - tabs_to_create[ALEAF_tab_name]["data"] = ALEAF_data[ - "ALEAF_portfolio" - ][tab_data["ABCE_tab_name"]] - - # Construct the path to which this file should be written - output_path = ( - Path(os.environ["ALEAF_DIR"]) - / "data" - / settings["ALEAF"]["ALEAF_model_type"] - / settings["ALEAF"]["ALEAF_region"] - / settings["ALEAF"]["ALEAF_portfolio_file"] - ) - - # Write this file to the destination - write_workbook_and_close("ALEAF_ERCOT", tabs_to_create, output_path) - - -def write_workbook_and_close(base_filename, tabs_to_create, output_file_path): - # Load all tab data into pandas dataframes - for ALEAF_tab_name, tab_data in tabs_to_create.items(): - # If the data is already in a dataframe, no need to convert it - if not isinstance(tab_data["data"], pd.DataFrame): - orient = "index" - if "orient" in tab_data.keys(): - if tab_data["orient"] == "horizontal": - df = pd.DataFrame.from_dict([tab_data["data"]]) - elif tab_data["orient"] == "multiline_horizontal": - df = pd.DataFrame.from_dict( - tab_data["data"], orient="index" - ) - else: - df = ( - 
pd.DataFrame.from_dict( - tab_data["data"], orient="index", columns=["Value"] - ) - .reset_index() - .rename(columns={"index": "Setting"}) - ) - - tabs_to_create[ALEAF_tab_name]["data"] = df - - # Create an ExcelWriter object to contain all tabs and save file - writer_object = pd.ExcelWriter(output_file_path, engine="openpyxl") - - # Write all tabs to file - for ALEAF_tab_name, tab_data in tabs_to_create.items(): - tab_data["data"].to_excel( - writer_object, sheet_name=ALEAF_tab_name, index=False - ) - - # Write all changes and close the file writer - writer_object.close() - - def set_unit_type_policy_adjustment(unit_type, unit_type_data, settings): # Initialize all units with zero policy adjustment carbon_tax_per_MWh = 0 @@ -614,7 +80,6 @@ def set_unit_type_policy_adjustment(unit_type, unit_type_data, settings): return carbon_tax_per_MWh, tax_credits_per_MWh, tax_credits_per_MW, - def compute_unit_specs_cols(unit_specs, settings): for unit_type, unit_type_data in unit_specs.items(): # Ensure all units have fuel cost elements @@ -651,36 +116,3 @@ def initialize_unit_specs(settings, args): return unit_specs - -def update_ALEAF_data(ALEAF_data, settings): - # Update ALEAF_data with settings data - ALEAF_data["ALEAF_Master_LC_GEP"]["scenario_settings"][ - "scenario_name" - ] = settings["simulation"]["scenario_name"] - - return ALEAF_data - - -def create_ALEAF_files(settings, ALEAF_data, unit_specs_data, db, current_pd): - # Process the system portfolio - system_portfolio = process_system_portfolio(db, current_pd) - - # Process the dictionary unit_specs into the A-LEAF-style format - unit_specs = update_unit_specs_for_ALEAF(unit_specs_data, system_portfolio) - - # Process the unit_specs data into A-LEAF-ready dataframes - gen_technology, gen, ATB_settings = create_ALEAF_unit_dataframes(unit_specs) - - # Update ALEAF data - ALEAF_data = update_ALEAF_data(ALEAF_data, settings) - - # Create the ALEAF_Master.xlsx file - create_ALEAF_Master_file(ALEAF_data, settings) - - # 
Create the ALEAF_Master_LC_GEP.xlsx file - create_ALEAF_Master_LC_GEP_file( - ALEAF_data, gen_technology, ATB_settings, settings - ) - - # Create the ALEAF_portfolio.xlsx file - create_ALEAF_portfolio_file(ALEAF_data, gen, settings) diff --git a/src/model.py b/src/model.py index e684c5d5..a90d2ae7 100644 --- a/src/model.py +++ b/src/model.py @@ -61,10 +61,6 @@ def __init__(self, settings, args): if self.args.verbosity > 2: self.ensure_tmp_dir_exists() - # If running A-LEAF, set up any necessary file paths - if self.settings["simulation"]["annual_dispatch_engine"] == "ALEAF": - self.set_ALEAF_file_paths() - # Define the agent schedule, using randomly-ordered agent activation self.schedule = RandomActivation(self) @@ -214,19 +210,6 @@ def show_abce_header(self): for line in logo.read().splitlines(): logging.log(self.settings["constants"]["vis_lvl"], line) - def set_ALEAF_file_paths(self): - """Set up all absolute paths to ALEAF and its input files, and - save them as member data. - """ - # Set path to ALEAF outputs - self.ALEAF_output_data_path = Path( - Path(os.environ["ALEAF_DIR"]) - / "output" - / self.settings["ALEAF"]["ALEAF_model_type"] - / self.settings["ALEAF"]["ALEAF_region"] - / f"scenario_1_{self.settings['simulation']['scenario_name']}" - ) - def add_unit_specs_to_db(self): """ This function selects only the unit_specs columns needed for the @@ -612,65 +595,24 @@ def step(self, demo=False): logging.log(self.settings["constants"]["vis_lvl"], "\n") user_response = input("Press Enter to continue: ") - dispatch_engine = self.settings["simulation"]["annual_dispatch_engine"] - - if dispatch_engine in ["ALEAF", "aleaf", "A-LEAF", "a-leaf"]: - # Re-load the baseline A-LEAF data - ALEAF_data = idm.load_data( - Path(self.args.inputs_path) - / self.settings["ALEAF"]["ALEAF_data_file"] - ) - - # Generate all three A-LEAF input files and save them to the - # appropriate subdirectories in the A-LEAF top-level directory - idm.create_ALEAF_files( - self.settings, - 
ALEAF_data, - self.unit_specs, - self.db, - self.current_pd, - ) - - # Set up the command to execute ALEAF - logging.log( - self.settings["constants"]["vis_lvl"], "Running A-LEAF..." - ) - run_script_path = Path(os.environ["ALEAF_DIR"]) / "execute_ALEAF.jl" - ALEAF_env_path = Path(os.environ["ALEAF_DIR"]) / "." - ALEAF_sysimage_path = ( - Path(os.environ["ALEAF_DIR"]) / "aleafSysimage.so" - ) - dispatch_cmd = ( - f"julia --project={ALEAF_env_path} " - + f"-J {ALEAF_sysimage_path} {run_script_path} " - + f"{self.settings['ALEAF']['ALEAF_abs_path']}" - ) - - elif dispatch_engine in ["ABCE", "abce"]: - # Set up the command to run dispatch.jl in annual exact mode - ABCE_ENV = Path(os.environ["ABCE_ENV"]) - annual_disp_script_path = Path(self.settings["file_paths"]["ABCE_abs_path"]) / "src" / "annual_dispatch.jl" + # Set up the command to run dispatch.jl in annual exact mode + ABCE_ENV = Path(os.environ["ABCE_ENV"]) + annual_disp_script_path = Path(self.settings["file_paths"]["ABCE_abs_path"]) / "src" / "annual_dispatch.jl" - dispatch_cmd = ( - f"julia --project={ABCE_ENV} {annual_disp_script_path} " - + f"--ABCE_dir={self.settings['file_paths']['ABCE_abs_path']} " - + f"--current_pd={self.current_pd} " - + f"--settings_file={self.args.settings_file} " - ) + dispatch_cmd = ( + f"julia --project={ABCE_ENV} {annual_disp_script_path} " + + f"--ABCE_dir={self.settings['file_paths']['ABCE_abs_path']} " + + f"--current_pd={self.current_pd} " + + f"--settings_file={self.args.settings_file} " + ) # Run the dispatch simulation - if dispatch_engine not in ["none", "None"]: - if self.args.verbosity < 2: - sp = subprocess.check_call( - dispatch_cmd, shell=True, stdout=open(os.devnull, "wb") - ) - else: - sp = subprocess.check_call(dispatch_cmd, shell=True) - - # Save data to its final destination, if needed - if dispatch_engine in ["ALEAF", "aleaf", "A-LEAF", "a-leaf"]: - self.save_ALEAF_outputs() - self.process_ALEAF_dispatch_results() + if self.args.verbosity < 2: + sp = 
subprocess.check_call( + dispatch_cmd, shell=True, stdout=open(os.devnull, "wb") + ) + else: + sp = subprocess.check_call(dispatch_cmd, shell=True) def display_step_header(self): @@ -1105,78 +1047,3 @@ def execute_all_status_updates(self): self.db.cursor().execute("DELETE FROM asset_updates") self.db.commit() - - def save_ALEAF_outputs(self): - # Copy all ALEAF output files to the output directory, with - # scenario and step-specific names - files_to_save = [ - "dispatch_summary_OP", - "expansion_result", - "system_summary_OP", - "system_tech_summary_OP", - ] - for outfile in files_to_save: - old_filename = ( - f"{self.settings['simulation']['scenario_name']}" - + f"__{outfile}.csv" - ) - old_filepath = Path(self.ALEAF_output_data_path) / old_filename - new_filename = ( - f"{self.settings['simulation']['scenario_name']}" - + f"__{outfile}__step_{self.current_pd}.csv" - ) - new_filepath = Path(self.primary_output_data_path) / new_filename - shutil.copy2(old_filepath, new_filepath) - - - def process_ALEAF_dispatch_results(self): - # Find and load the ALEAF dispatch file corresponding to the current - # simulation period - fname_pattern = f"dispatch_summary_OP__step_{self.current_pd}" - ALEAF_dsp_file = None - for fname in os.listdir(Path(self.primary_output_data_path)): - if ( - "dispatch" in fname - and "OP" in fname - and f"{self.current_pd}" in fname - ): - ALEAF_dsp_file = Path(self.primary_output_data_path) / fname - - # Get the number of units which are currently operational (needed for - # scaling dispatch results to a per-unit basis) - sql_query = ( - f"SELECT unit_type, COUNT(unit_type) FROM assets " - + f"WHERE completion_pd <= {self.current_pd} " - + f"AND retirement_pd > {self.current_pd} " - + f"AND cancellation_pd > {self.current_pd} " - + f"GROUP BY unit_type" - ) - num_units = pd.read_sql_query( - sql_query, self.db, index_col="unit_type" - ).rename(columns={"COUNT(unit_type)": "num_units"}) - - # Postprocess ALEAF dispatch results - ALEAF_dsp_results 
= dsp.postprocess_dispatch( - ALEAF_dsp_file, num_units, self.unit_specs - ) - ALEAF_dsp_results["period"] = self.current_pd - ALEAF_dsp_results = ALEAF_dsp_results.reset_index().rename( - columns={"index": "unit_type"} - ) - - # Get list of column names for ordering - cursor = self.db.cursor().execute( - "SELECT * FROM annual_dispatch_unit_summary" - ) - col_names = [description[0] for description in cursor.description] - - # Reorder ALEAF_dsp_results to match database - ALEAF_dsp_results = ALEAF_dsp_results[col_names] - - logging.debug(ALEAF_dsp_results) - - ALEAF_dsp_results.to_sql( - "ALEAF_dispatch_results", self.db, if_exists="append", index=False - ) - - diff --git a/src/postprocessing.py b/src/postprocessing.py index f21cb52b..10d1f882 100644 --- a/src/postprocessing.py +++ b/src/postprocessing.py @@ -15,7 +15,6 @@ ########################################################################## import os -import numpy as np import pandas as pd from pathlib import Path import yaml diff --git a/src/seed_creator.py b/src/seed_creator.py index 19c006a9..c12c59c3 100644 --- a/src/seed_creator.py +++ b/src/seed_creator.py @@ -20,7 +20,6 @@ import os import sys import logging -import pandas as pd # Database Specification: abce_tables = {