diff --git a/UFOManager-2.0.0/.zenodo.json b/UFOManager-2.0.0/.zenodo.json
new file mode 100644
index 0000000000000000000000000000000000000000..a69f320d36d24e6bc800dda05e82cf6352373083
--- /dev/null
+++ b/UFOManager-2.0.0/.zenodo.json
@@ -0,0 +1,20 @@
+{
+    "creators": [
+        {
+            "affiliation": "University of Illinois at Urbana-Champaign",
+            "name": "Wang, Zijun"
+        },
+        {
+            "affiliation": "University of Illinois at Urbana-Champaign",
+            "name": "Roy, Avik"
+        },
+        {
+            "affiliation": "University of Illinois at Urbana-Champaign",
+            "name": "Neubauer, Mark"
+        }
+    ],
+
+    "license": "Apache-2.0",
+
+    "title": "UFOManager: Tools to preserve and download Universal FeynRules Output (UFO) models"
+}
diff --git a/UFOManager-2.0.0/README.md b/UFOManager-2.0.0/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..f294ff63bf608fb261bee56aa98b361add72d153
--- /dev/null
+++ b/UFOManager-2.0.0/README.md
@@ -0,0 +1,316 @@
+[![DOI](https://zenodo.org/badge/516765685.svg)](https://zenodo.org/badge/latestdoi/516765685)
+
+# About `UFO` models
+`UFO` stands for **U**niversal **F**eynRules **O**utput. UFO models digitally store detailed information about the Lagrangian of a quantum field theory: the names, [PDG IDs](https://doi.org/10.1007/BF02683426), and physical properties of elementary particles; relevant parameters such as coupling strengths; and the vertices associated with Feynman diagrams. They are developed as self-contained Python libraries and can be used by Monte Carlo event generators such as [MadGraph](http://madgraph.phys.ucl.ac.be) to simulate physics processes in a collider experiment. UFO models are widely used in the context of beyond the Standard Model (BSM) theories.
+
+Further details about the content and format of UFO models can be found in the article [UFO – The Universal FeynRules Output](https://doi.org/10.1016/j.cpc.2012.01.022). You can also find examples of different UFO models on this webpage: [https://feynrules.irmp.ucl.ac.be/wiki/ModelDatabaseMainPage](https://feynrules.irmp.ucl.ac.be/wiki/ModelDatabaseMainPage).
+
+# About `FAIR` principles
+`FAIR` stands for **F**indable, **A**ccessible, **I**nteroperable, and **R**eusable. The FAIR principles were originally proposed in [this paper](https://doi.org/10.1038/sdata.2016.18) as domain-agnostic guidelines for the preservation and management of scientific data, and have since been interpreted in the context of other digital objects such as research software, machine learning (ML) models, and notebooks. The guidelines focus on persistent preservation of such content so that it is well preserved, easily found, and reused, with additional emphasis on improving the ability of machines to automatically search for and use digital content. For more information on the FAIR principles, visit [GO FAIR](https://www.go-fair.org/fair-principles).
+
+Domain-specific interpretations of the FAIR principles in the context of different kinds of digital objects are being investigated by multiple groups. For instance, the [FAIR4HEP](https://fair4hep.github.io/) group focuses on identifying best practices to make data and ML models FAIR in high energy physics.
+
+# About this repository
+Like any other digital content, UFO models have software and platform dependencies, require version control, and can benefit from a unified way of preserving and distributing these resources. This FAIR-principle-guided repository has been developed as a comprehensive tool to automate the persistent preservation and distribution of UFO models and their corresponding metadata, creating a reliable bridge between the developers and users of such models. The primary content of this repository is the `UFOManager` package, which consists of two Python scripts: `UFOUpload.py` for uploading UFO models and `UFODownload.py` for downloading them. Both can be used as standalone `Python` scripts, or the `UFOManager` package can be incorporated into a user-developed custom facilitator script.
+
+## Dependencies
+The functions of the `UFOManager/UFOUpload.py` script, i.e. validating UFO models, generating their metadata, and uploading them to Zenodo with DOIs, are supported under both Python 2 and 3. Meanwhile, `UFOManager/UFODownload.py` is only supported under Python 3. The dependencies needed to use this package are listed in the files `requirements_Python2.txt` and `requirements_Python3.txt`. Specific instructions to set up the necessary environments and install these dependencies are given in the following section.
+
+The following list includes some of the dependencies:
+- [Particle](https://github.com/scikit-hep/particle/) to validate particle [PDG IDs](https://pdg.lbl.gov/2020/reviews/rpp2020-rev-monte-carlo-numbering.pdf)
+- [PyGithub](https://github.com/PyGithub/PyGithub) to access the Github REST API
+- [Requests](https://pypi.org/project/requests/) to send HTTP requests for accessing and validating web addresses
+- [zenodo_get](https://github.com/dvolgyes/zenodo_get) to download files from Zenodo records (only used in Python 3 environment)
+
+## Setup (the first time only)
+This package can be set up by simply cloning the git repository and installing the necessary dependencies. To download this package:
+```bash
+$ git clone git@github.com:Neubauer-Group/UFOManager.git
+```
+Since some of the older UFO models are only available in `Python 2`, we have made `UFOUpload.py` compatible with both versions of `Python`. However, we strongly recommend using the environments prescribed in `requirements_Python2.txt` and `requirements_Python3.txt` when using this package with the desired `Python` version. Such an environment can be created with [conda](https://docs.conda.io/en/latest/miniconda.html). For `Python 2`, run:
+```bash
+$ conda create --name ufo2 python=2.7
+```
+and for `Python 3`, run:
+```bash
+$ conda create --name ufo3 python=3.8
+```
+After creating the desired environment, run the following commands to set up the dependencies:
+```
+$ conda activate ufo<N>
+$ pip install -r requirements_Python<N>.txt
+```
+Here `<N>` is the `Python` major version (`2` or `3`). This will install the necessary dependencies into the corresponding environment.
+**Note:** `UFODownload.py` is not supported under `Python 2`, so users who wish to search for and download models should set up the `Python 3` environment.
+
+## Before Using the Package (every time)
+Every time before using the package, set up the environment and update the `PYTHONPATH` environment variable:
+```
+$ conda activate ufo<N>
+$ cd <path-to-UFOManager>
+$ export PYTHONPATH=$PYTHONPATH:$PWD
+```
+
+# Using `UFOUpload`: Model Validation, Metadata Generation, and Preservation
+Developers can use `UFOUpload.py` to validate the structure and content of their models and to publish them with persistent **D**igital **O**bject **I**dentifiers (DOIs). When provided with the model files and some basic model information, the upload function can validate the model files, generate metadata for the model in the form of a `json` file, publish the model to [Zenodo](https://zenodo.org/), and make the metadata available via a companion repository, [UFOMetadata](https://github.com/Neubauer-Group/UFOMetadata), for preservation.
+
+## File Preparation
+To use the upload function, create a directory `Your_Model_Folder` that contains the UFO model, either as a compressed folder (with extension `.tar`, `.tar.gz`, `.tgz`, or `.zip`) or as a plain directory. An additional JSON file called `metadata.json` is needed to provide basic information about the model.
+
+For compressed folders, tarballs and zip archives are accepted, with the UFO model Python scripts inside:
+```
+--Your_Model_Folder
+ --metadata.json
+ --Your_Model.zip/.tgz/.tar.gz
+   --__init__.py
+   --object_library.py
+    ...
+```
+or
+```
+--Your_Model_Folder
+ --metadata.json
+ --Your_Model.zip/.tgz/.tar.gz
+   --Your_Model_Folder
+    --__init__.py
+    --object_library.py
+    ...
+```
+For `metadata.json`, some basic information is required; see the requirements in [this example](https://github.com/Neubauer-Group/UFOManager/blob/main/metadata.json). For author information in `metadata.json`, affiliation and contact are optional, but at least one contact is needed. The file also requires a reference to an associated publication (either an [arXiv](https://arxiv.org) ID or a `DOI`) that contains the necessary physics details and validation.
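As an illustration only, a minimal `metadata.json` might look like the sketch below. The authoritative field names are those in the linked example; the `Author` key and every value here (name, affiliation, contact, arXiv ID) are hypothetical placeholders:

```json
{
    "Author": [
        {
            "name": "Jane Doe",
            "affiliation": "Example University",
            "contact": "jane.doe@example.edu"
        }
    ],
    "Paper_id": {
        "arXiv": "1234.56789"
    },
    "Description": "A short description of the physics content of the model"
}
```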
+
+Finally, developers need to prepare a `.txt` file containing the **full** paths to their models, one path per line, for example:
+```
+path-to-model1
+path-to-model2
+...
+```
+
+## Usage
+From the command line, one can run:
+```bash
+$ python -m UFOManager.UFOUpload <command>
+```
+where `<command>` is one of the following five choices:
+```
+'Validation check' 
+'Generate metadata' 
+'Upload model' 
+'Update new version' 
+'Upload metadata to GitHub'
+``` 
+
+The script runs interactively, requiring the user to provide the path to the `.txt` file containing the paths to models:
+```bash
+$ Please enter the path to a text file with the list of all UFO models: <path-to-txt-file>
+```
+The same functionality can be achieved from an independent facilitator script by including the following lines:
+```
+from UFOManager import UFOUpload
+UFOUpload.UFOUpload(<command>, <path-to-txt-file>)
+```
+
+
+### Validation check
+At this point the script will first check your file preparation, e.g. whether your folder contains only the two required files and whether your `metadata.json` contains the necessary information. After that, your model will be validated: the script checks whether it can be imported as a complete Python package, since event generators require the model input to be one. It then reads through the required model files, checks their completeness, and generates basic model-related information, such as the particles defined in your model and the number of vertices it defines.
+
+### Generate metadata
+This request first runs the validation check on your model and generates the necessary model-related information. Then, some information is required from the developer:
+```
+$ Please name your model: <model-name>
+$ Please enter your model version: <model-version>
+```
+Note that the model will be given a default DOI of `0` in the enriched metadata unless the `Model Doi` field is already present in the initial metadata. If you are using this functionality for a model that already has a DOI, you should provide that information in the initial metadata file.
+
+The new enriched metadata `json` file will be created inside `Your_Model_Folder`. 
+```
+--Your_Model_Folder
+ --metadata.json
+ --Your_Model.zip/.tgz/.tar.gz
+ --Your_Model.json
+```
+You can see an [example enriched metadata file](https://github.com/Neubauer-Group/UFOMetadata/blob/main/metadata.json) stored in the [UFOMetadata](https://github.com/Neubauer-Group/UFOMetadata) repository.
+
+### Upload model    
+At the beginning, your Zenodo personal access token and your GitHub personal access token will be required.
+```
+$ Please enter your Zenodo access token: <Zenodo_Access_Token>
+$ Please enter your GitHub access token: <Github_Access_Token>
+```
+For your Zenodo personal access token, the `deposit:actions` and `deposit:write` scopes should be allowed.
+
+The script will go through the validation check of your model, generate the enriched metadata, and then use the [`Zenodo API`](https://developers.zenodo.org/) to publish your model to Zenodo and get a DOI for your model. 
+
+If everything goes well, you will see a new draft in your Zenodo account with a reserved Zenodo DOI, and the new metadata file will be created in `Your_Model_Folder`. After that, the [UFO Models Preservation repository](https://github.com/Neubauer-Group/UFOMetadata) used for metadata preservation will be forked into your GitHub account and the new metadata will be added to the fork.
+
+**Note**: If you forked [UFOMetadata](https://github.com/Neubauer-Group/UFOMetadata) before, make sure that your fork is up to date with the original repository.
+
+Before finally publishing your model and uploading the new enriched metadata to GitHub, you can make changes to your Zenodo draft. You can then choose whether to continue:
+```
+$ Do you want to publish your model and send your new enriched metadata file to the GitHub repository UFOMetadata? Yes or No: <Yes/No>
+```
+If you choose `Yes`, your model will be published to Zenodo and a pull request with your new enriched metadata will be created. A CI-enabled automatic check runs when the pull request is made; it may take around 5 minutes, since it verifies that the model's DOI page is available. If any problem occurs, please contact Zijun Wang (zijun4@illinois.edu) or Avik Roy (avroy@illinois.edu).
+
+If you choose `No`, you can publish your model yourself: visit the associated Zenodo draft, edit it, and publish. Afterwards, you can create the pull request to add your enriched metadata to [UFOMetadata](https://github.com/Neubauer-Group/UFOMetadata) yourself, or send the enriched metadata file to Zijun Wang (zijun4@illinois.edu) or Avik Roy (avroy@illinois.edu).
+
+### Update new version
+To enable this functionality, your initial `metadata.json` needs an additional key-value pair:
+```
+"Existing Model Doi": "Zenodo-issued concept-DOI for your model"
+```
+The concept DOI is a unique identifier issued by Zenodo that refers to all available versions of the model and always resolves to the latest version.
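For context, the download script in this repository discovers a record's concept DOI by fetching the DOI landing page and parsing the "Cite all versions" link out of the HTML. A minimal sketch of that string parsing, using a hypothetical landing-page line (the real script loops over the lines returned by `requests.get(...)`):

```python
# Hypothetical line from a Zenodo DOI landing page (illustrative values only)
line = ('Cite all versions? You can cite all versions by using the DOI '
        '<a href="https://doi.org/10.5281/zenodo.1234567">10.5281/zenodo.1234567</a>.')

# The same string surgery the download script applies to each fetched line
conceptdoi = line.split("https://doi.org")[1].split('</a>')[0].split('">')[1]
print(conceptdoi)  # → 10.5281/zenodo.1234567
```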
+
+Afterwards, the upload script works in the same way as it does for 'Upload model'.
+
+### Upload metadata to GitHub
+If you previously uploaded your model to Zenodo and want to create an enriched metadata file for it, you need to add the key-value pair
+```
+"Model Doi": "Zenodo DOI of your model"
+```
+to `metadata.json`. Your GitHub personal access token will be required for this functionality. The script will run the validation check on your model and create the enriched metadata file in `Your_Model_Folder`. After that, the [UFO Models Preservation repository](https://github.com/Neubauer-Group/UFOMetadata) used for metadata preservation will be forked into your GitHub account, the new metadata will be added, and a pull request will be made.
+
+
+### Dealing with errors
+You will be given feedback when most errors happen. **If an error occurs while you are uploading your model to Zenodo or uploading metadata to GitHub, it is recommended to delete the draft in Zenodo and the newly created enriched metadata in your fork before re-running the script.**
+
+# Using `UFODownload`: Search and Download UFO models
+Users can use `UFODownload.py` to search for UFO models using the metadata preserved in the [UFOMetadata repository](https://github.com/Neubauer-Group/UFOMetadata) and to download them from Zenodo. The commands must be run with `Python 3`.
+
+## Usage
+From the command line, one can run:
+```bash
+$ python -m UFOManager.UFODownload <command>
+```
+where `<command>` is one of the following three choices:
+```
+'Search for model' 
+'Search and Download'
+'Download model'
+``` 
+The same functionality can be achieved from an independent facilitator script by including the following lines:
+```
+from UFOManager import UFODownload
+UFODownload.UFODownload(<command>)
+```
+
+### Search for model
+Currently, `UFODownload.py` supports searching the UFO model metadata files on four types of information: the model's associated paper ID, the model's Zenodo DOI, and the PDG codes or names of particles in the model. You need to make a choice interactively:
+```bash
+$ Please choose your keyword type: Paper_id, Model Doi, pdg code, or name
+```
+Then, you can start your search. For `Paper_id` and `Model Doi`, a single input value is allowed; for particle names or PDG codes you can input multiple values, separated by commas:
+```bash
+$ Please enter your needed pdg code: <code1,code2,...>
+```
+or
+```bash
+$ Please enter your needed particle name: <name1,name2,...>
+```
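Either way, the comma-separated input is parsed with a simple split-and-strip before being compared against the metadata, mirroring what the download script does internally; the input values here are hypothetical:

```python
# Hypothetical interactive input: PDG codes separated by commas (extra whitespace tolerated)
raw = " 9000005, 9000006 ,35"

# The same split-and-strip the download script applies to the interactive input
pdg_code_list = [int(tok.strip()) for tok in raw.split(',')]
print(pdg_code_list)  # → [9000005, 9000006, 35]
```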
+**Note**: Your input particles should not all be Standard Model elementary particles, since those are contained in every model.
+
+You will then get a feedback table containing the metadata file name, model name, paper ID, and model DOI of the UFO models that fit your search.
+
+You can also restart the search by responding `Yes` to the following question:
+```bash
+$ Do you still want to search for models? Please type in Yes or No.
+```
+
+### Search and Download 
+It starts with the usual steps of searching for models. Once your search is complete, it will give you a list of metadata file names (with `.json` extensions). You can then download the UFO models you need by typing in their corresponding metadata file names, separated by commas:
+```
+$ You can choose the metadata you want to download: <meta1.json,meta2.json,...>
+```
+After that, you will be asked to name a download folder, and all the UFO models you selected will be downloaded into it:
+```bash
+$ Please name your download folder: <Your_Download_Folder>
+```
+The folder is created under your current working directory:
+```
+--Your current working path
+ --Your_Download_Folder
+```
+
+### Download model
+
+The script will first display the full list of preserved metadata files. You can then download the UFO models you need by typing in their corresponding metadata file full names (the `.json` extension is required), separated by commas. You can find the full names in the displayed list.
+
+# References
+
+This work was done as a part of the IRIS-HEP Fellowship project for Zijun Wang under the mentorship of Avik Roy, Mark S. Neubauer, and Matthew Feickert. The presentation is available at [this link](https://indico.cern.ch/event/1195270/contributions/5043771/attachments/2508513/4311003/Zijun_Wang_IRIS-HEP_Presentation.pdf).
+
+To cite this work, add the following to your bibliography:
+```
+Neubauer, M. S., Roy, A., & Wang, Z. (2022). Making Digital Objects FAIR in High Energy Physics: An Implementation for Universal FeynRules Output (UFO) Models. arXiv preprint arXiv:2209.09752.
+```
+or use the following BibTeX entry:
+```
+@article{neubauer2022making,
+  title={Making Digital Objects FAIR in High Energy Physics: An Implementation for Universal FeynRules Output (UFO) Models},
+  author={Neubauer, Mark S and Roy, Avik and Wang, Zijun},
+  journal={arXiv preprint arXiv:2209.09752},
+  year={2022}
+}
+```
diff --git a/UFOManager-2.0.0/UFOManager/UFODownload.py b/UFOManager-2.0.0/UFOManager/UFODownload.py
new file mode 100644
index 0000000000000000000000000000000000000000..ce31d782183b03cca8619b33740aa31e8bc93250
--- /dev/null
+++ b/UFOManager-2.0.0/UFOManager/UFODownload.py
@@ -0,0 +1,316 @@
+import sys
+from termcolor import colored
+if sys.version_info.major == 2:
+    raise Exception(colored('UFODownload.py only works for Python 3','red'))
+from github import Github
+import os
+import requests
+import shutil
+import json
+import zenodo_get
+from getpass import getpass
+import argparse
+from tabulate import tabulate
+from particle import PDGID
+# This python script utilizes zenodo_get package from David Volgyes
+# David Volgyes. (2020, February 20). Zenodo_get: a downloader for Zenodo records (Version 1.3.4).
+# Zenodo. https://doi.org/10.5281/zenodo.1261812
+
+# Bibtex format:
+'''@misc{david_volgyes_2020_10.5281/zenodo.1261812,
+author  = {David V\"{o}lgyes},
+title   = {Zenodo_get: a downloader for Zenodo records.},
+month   = {2},
+year    = {2020},
+doi     = {10.5281/zenodo.1261812},
+url     = {https://doi.org/10.5281/zenodo.1261812}
+}'''
+
+
+def AccessGitRepo(Github_Access_Token):
+    # Connect to Github Repository
+
+    g = Github(Github_Access_Token)
+    repo = g.get_repo("Neubauer-Group/UFOMetadata")
+    Allmetadata = repo.get_contents('Metadata')
+
+    # Create a temporary folder for metadata
+    try:
+        os.mkdir('MetadatafilesTemporaryFolder')
+    except FileExistsError:
+        os.chdir('MetadatafilesTemporaryFolder')
+        return
+
+    os.chdir('MetadatafilesTemporaryFolder')
+
+    # Download all metadata files from GitHub Repository
+    for i in Allmetadata:
+        name = i.name
+        url = 'https://raw.githubusercontent.com/Neubauer-Group/UFOMetadata/main/Metadata/'
+        url += name
+        metadata = requests.get(url)
+        with open(name, 'wb') as f:
+            f.write(metadata.content)
+
+def Display(jsonlist):
+    display_data = []    
+    for file in jsonlist:
+        with open(file,encoding='utf-8') as metadata:
+            metadatafile = json.load(metadata)
+        if 'arXiv' in metadatafile['Paper_id']:
+            information = [file,metadatafile['Model name'],metadatafile['Paper_id']['arXiv'],metadatafile['Model Doi']] #metadatafile['Description']]
+        else:
+            if 'doi.org' in metadatafile['Paper_id']['doi']:
+                information = [file,metadatafile['Model name'],metadatafile['Paper_id']['doi'][16:],metadatafile['Model Doi']] #,metadatafile['Description']]
+            else:
+                information = [file,metadatafile['Model name'],metadatafile['Paper_id']['doi'],metadatafile['Model Doi']] #,metadatafile['Description']]
+        display_data.append(information)
+    
+    print(tabulate(display_data, headers=["Metadata file","Model Name","Paper ID","Model DOI"]))
+        
+
+def Search(Github_Access_Token):
+    global api_path
+    valid_search_keys = ['Paper_id', 'Model Doi', 'pdg code', 'name']
+    # Start the Interface
+    print('You can search for a model with {}, {}, {}, {} (of certain particles).'.format(
+        colored('Paper_id', 'magenta'),
+        colored('Model Doi', 'magenta'),
+        colored('pdg code', 'magenta'),
+        colored('name', 'magenta')))
+    all_json_file = set()
+    
+    # Now allows multiple times search
+    while True:
+        search_type = input('Please choose your keyword type: ')
+        if search_type not in valid_search_keys:
+            print(colored('Invalid Keyword!', 'red'))
+            continue
+        
+        # Search for models with corresponding paper id
+        if search_type == 'Paper_id':
+            paper_id = input('Please enter your needed paper_id: ')
+            
+            target_list = []
+            for file in os.listdir('.'):
+                with open(file,encoding='utf-8') as metadata:
+                    metadatafile = json.load(metadata)
+                paper_ids = [metadatafile['Paper_id'][i] for i in metadatafile['Paper_id']]
+                if paper_id in paper_ids:
+                    target_list.append(file)
+            
+            if len(target_list) == 0:
+                print('There is no model associated with the paper_id ' + colored(paper_id,'red') + ' you are looking for.')
+            else:
+                print('Based on your search, we find models below:')
+                Display(jsonlist=target_list)
+                all_json_file = all_json_file.union(target_list)
+        
+        # Search for models with corresponding model Doi from Zenodo
+        if search_type == 'Model Doi':
+            Model_Doi = input('Please enter your needed Model doi: ')
+
+            model_name = ''
+            target_list = []
+            current_working_doi = ''
+            for file in os.listdir('.'):
+                with open(file,encoding='utf-8') as metadata:
+                    metadatafile = json.load(metadata)
+                if Model_Doi == metadatafile['Model Doi']:
+                    model_name = file
+                    break
+                if 'Existing Model Doi' in metadatafile and metadatafile['Existing Model Doi'] == Model_Doi:
+                    model_name = file
+                    break
+            if len(model_name) != 0:
+                target_list.append(model_name)
+                with open(model_name,encoding='utf-8') as metadata:
+                    _metadatafile = json.load(metadata)
+                if 'Existing Model Doi' in _metadatafile:
+                    current_working_doi = _metadatafile['Existing Model Doi']
+                    for file in os.listdir('.'):
+                        if file in target_list:
+                            continue
+                        with open(file,encoding='utf-8') as metadata:
+                            metadatafile = json.load(metadata)
+                        if 'Existing Model Doi' in metadatafile:
+                            if metadatafile['Existing Model Doi'] == current_working_doi:
+                                target_list.append(file)
+                            continue
+                        this_doi = metadatafile['Model Doi']
+                        conceptdoi = None
+                        r = requests.get("https://doi.org/" + this_doi)
+                        for line in r.iter_lines():
+                            line = str(line)
+                            if 'Cite all versions' not in line:
+                                continue
+                            conceptdoi = line.split("https://doi.org")[1].split('</a>')[0].split('">')[1]
+                            break
+                        if conceptdoi is not None and current_working_doi == conceptdoi:
+                            target_list.append(file)
+                print('Based on your search, we find models below:')                                                        
+                Display(jsonlist=target_list)
+                all_json_file = all_json_file.union(target_list)
+            else:
+                print('There is no model associated with the Model Doi ' + colored(Model_Doi,'red') + ' you are looking for.') 
+            
+
+        
+        # Search for models with particles' pdg codes
+        if search_type == 'pdg code':
+            pdg_code = [f.strip() for f in input('Please enter your needed pdg code: ').split(',')]
+            pdg_code_list = [int(i) for i in pdg_code]
+            target_list = []
+            elementary_particles_list = []
+            for i in pdg_code_list:
+                if PDGID(i).is_sm_quark or PDGID(i).is_sm_gauge_boson_or_higgs or PDGID(i).is_sm_lepton:
+                    elementary_particles_list.append(i)
+            elementary_particle_compare = all(i in elementary_particles_list for i in pdg_code_list)
+            Feedback = 'All particles you are looking for are elementary particles which are contained by all models. Please try again with BSM particles.'
+            if elementary_particle_compare:
+                print(colored(Feedback,'red'))
+            else:
+                for file in os.listdir('.'):
+                    with open(file,encoding='utf-8') as metadata:
+                        metadatafile = json.load(metadata)
+                    All_particles_pdg_code = [metadatafile['All Particles'][i] for i in metadatafile['All Particles']]
+                    pdg_code_compare_result = all(i in All_particles_pdg_code for i in pdg_code_list)
+                    if pdg_code_compare_result:
+                        target_list.append(file)
+                    
+                if len(target_list) == 0:
+                    print('There is no model containing all particle(s) with pdg code ' + colored(pdg_code,'red') + ' you are looking for.')
+                else:
+                    print('Based on your search, we find models below:')
+                    Display(jsonlist=target_list)
+                    all_json_file = all_json_file.union(target_list)
+                    
+        
+        # Search for models with particles' names
+        if search_type == 'name':
+            particle_name_list = [f.strip() for f in input('Please enter your needed particle name: ').split(',')]
+            pdg_code_corresponding_list = []
+            target_list = []
+
+            for file in os.listdir('.'):
+                with open(file,encoding='utf-8') as metadata:
+                    metadatafile = json.load(metadata)
+                All_particles_name_list = [i for i in metadatafile['All Particles']]
+                particle_compare_result = all(i in All_particles_name_list for i in particle_name_list)
+                if particle_compare_result:
+                    pdg_code_corresponding_list = [metadatafile['All Particles'][i] for i in particle_name_list]
+                    break
+            if pdg_code_corresponding_list != []:
+                elementary_particles_list = []
+                for i in pdg_code_corresponding_list:
+                    if PDGID(i).is_sm_quark or PDGID(i).is_sm_gauge_boson_or_higgs or PDGID(i).is_sm_lepton:
+                        elementary_particles_list.append(i)
+                elementary_particle_compare = all(i in elementary_particles_list for i in pdg_code_corresponding_list)
+                Feedback = 'All particles you are looking for are elementary particles which are contained by all models. Please try again with BSM particles.'
+                if elementary_particle_compare:
+                    print(colored(Feedback,'red'))
+                else:
+                    for file in os.listdir('.'):
+                        with open(file,encoding='utf-8') as metadata:
+                            metadatafile = json.load(metadata)
+                        All_particles_pdg_code = list(metadatafile['All Particles'].values())
+                        pdg_code_compare_result_from_particles = all(i in All_particles_pdg_code for i in pdg_code_corresponding_list)
+                        if pdg_code_compare_result_from_particles:
+                            target_list.append(file)
+            else:
+                print('There is no model containing all particle(s) ' + colored(', '.join(particle_name_list), 'red') + ' you are looking for.')
+
+            if len(target_list) != 0:
+                print('Based on your search, we find models below:')
+                Display(jsonlist=target_list)
+                all_json_file = all_json_file.union(target_list)
+                                
+
+        # Stop the loop and exit search part     
+        if input('Do you still want to search for models? Please type in {} or {}: '.format(colored('Yes', 'green'), colored('No','red'))) == 'No':
+            break
+        
+    return list(all_json_file)
+
+
+
+def Downloader(Github_Access_Token, filelist=None):
+    global api_path
+    print('Here is the UFOModel metadata file list.')
+    if not filelist:
+        print("\n".join(list(os.listdir('.'))))
+    else:
+        print("\n".join(filelist))
+
+    # Start download part
+    download_command = input('Enter a comma separated list of metadata filenames from the list above to download corresponding files: ')
+
+    # Get models' doi
+    download_list = [f.strip() for f in download_command.split(',')]
+    download_doi = []
+    for file in download_list:
+        with open(file,encoding='utf-8') as metadata:
+            metadatafile = json.load(metadata)
+        download_doi.append(metadatafile['Model Doi'])
+
+    os.chdir(api_path)
+
+    foldername = input('Please name your download folder: ')
+    
+    try:
+        os.mkdir(foldername)
+        os.chdir(foldername)
+    except FileExistsError:
+        os.chdir(foldername)
+
+    # Download model files from zenodo using zenodo_get created by David Volgyes
+    for doi in download_doi:
+        zenodo_get.zenodo_get([doi])
+
+    print('Your requested models have been downloaded into %s under the current working directory.' %(foldername))
+
+def Search_Download(Github_Access_Token):
+    jsonlist = Search(Github_Access_Token)
+    Downloader(Github_Access_Token, jsonlist)
+
+def Delete():
+    global api_path
+    os.chdir(api_path)
+    shutil.rmtree('MetadatafilesTemporaryFolder')
+
+def UFODownload(command):
+    global api_path
+    api_path = os.getcwd()
+    Github_Access_Token = getpass('Please enter your Github access token:')
+    AccessGitRepo(Github_Access_Token=Github_Access_Token)
+
+    if command == 'Search for model':
+        Search(Github_Access_Token=Github_Access_Token)
+        Delete()
+    elif command == 'Download model':
+        Downloader(Github_Access_Token=Github_Access_Token)
+        Delete()
+    elif command == 'Search and Download':
+        Search_Download(Github_Access_Token=Github_Access_Token)
+        Delete()
+    else:
+        print('Wrong command! Please choose from ["Search for model", "Download model", "Search and Download"].')
+        Delete()
+
+if __name__ == '__main__':
+    FUNCTION_MAP = {'Search for model' : Search,
+                'Download model' : Downloader,
+                'Search and Download': Search_Download}
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument('command', choices=FUNCTION_MAP.keys())
+    args = parser.parse_args()
+    RunFunction = FUNCTION_MAP[args.command]
+
+    Github_Access_Token = getpass('Please enter your Github access token:')
+    api_path = os.getcwd()
+    AccessGitRepo(Github_Access_Token=Github_Access_Token)
+    RunFunction(Github_Access_Token=Github_Access_Token)
+    Delete()
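+
+# Example command-line session (hypothetical; assumes this module is saved as
+# UFODownload.py and the GitHub token can read Neubauer-Group/UFOMetadata):
+#
+#     $ python UFODownload.py 'Search and Download'
+#     Please enter your Github access token:
+#
+# Equivalent programmatic use, mirroring the __main__ block above:
+#
+#     >>> api_path = os.getcwd()
+#     >>> AccessGitRepo(Github_Access_Token='<token>')
+#     >>> Search_Download(Github_Access_Token='<token>')
+#     >>> Delete()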
diff --git a/UFOManager-2.0.0/UFOManager/UFOUpload.py b/UFOManager-2.0.0/UFOManager/UFOUpload.py
new file mode 100644
index 0000000000000000000000000000000000000000..7a66ad9cdf0f33cf57cf65ec80e1447b1f814b87
--- /dev/null
+++ b/UFOManager-2.0.0/UFOManager/UFOUpload.py
@@ -0,0 +1,1223 @@
+import os
+import zipfile
+import tarfile
+import shutil
+import sys
+import json
+import importlib
+import requests
+from github import Github
+from getpass import getpass
+import argparse
+from termcolor import colored, cprint
+import re
+import datetime
+from particle import PDGID
+
+if sys.version_info.major == 3:
+    raw_input = input
+
+regex = r'[^@]+@[^@]+\.[^@]+'
+
+def validator(model_path):
+
+    print("\nChecking Model: " + colored(model_path, "magenta") + "\n")
+    '''    Check for necessary files and their formats   '''
+    # List of all files in the folder
+    original_file = os.listdir(model_path)
+
+    #  Check if the folder contains and only contains required files
+    assert len(original_file) == 2, \
+        'File count check: ' + colored('FAILED! The directory {} must contain exactly two files: "metadata.json" and the model archive'.format(model_path), 'red')
+    print('File count check: ' + colored('PASSED!', 'green'))
+
+    assert 'metadata.json' in original_file, \
+        'Check if initial "metadata.json" exists: ' + colored('FAILED!', 'red')
+    try:
+        with open('metadata.json') as metadata:
+            file = json.load(metadata)
+    except (IOError, ValueError):
+        raise Exception('Check if initial "metadata.json" is correctly formatted: ' + colored('FAILED!', 'red'))
+    print('Check if initial "metadata.json" exists and is correctly formatted: ' + colored('PASSED!', 'green'))
+
+
+    '''    Check the content of metadata.json    '''
+    
+    # Check uploaded metadata.json for author names and contacts
+    
+    try:
+        assert file['Author']
+    except:
+        raise Exception(colored('"Author" field does not exist in metadata', 'red'))
+    all_contact = []
+    for i in file['Author']:
+        try:
+            assert i['name'].strip()
+        except:
+            raise Exception(colored('At least one author name does not exist in metadata', 'red'))
+        if 'contact' in i:
+            if not re.match(regex, i['contact'].strip()):
+                print(colored('At least one author contact is not a valid email address, please check!', 'yellow'))
+            all_contact.append(i['contact'])
+    assert all_contact, colored('No contact information exists for any author', 'red')
+    print('Check author information and contact information in initial metadata:' + colored(' PASSED!', 'green'))
+
+    # Check uploaded metadata.json for a paper id
+    
+    try:
+        assert file['Paper_id']
+    except:
+        raise Exception(colored('"Paper_id" field does not exist in metadata', 'red'))
+    assert 'doi' in file['Paper_id'] or 'arXiv' in file['Paper_id'], \
+        colored('"Paper_id" field does not contain a doi or arXiv ID', 'red')
+    if 'doi' in file['Paper_id']:
+        url = 'https://doi.org/' + file['Paper_id']['doi']
+        doi_webpage = requests.get(url)
+        assert doi_webpage.status_code < 400, colored('DOI does not resolve to a valid paper', 'red')
+    if 'arXiv' in file['Paper_id']:
+        url = 'https://arxiv.org/abs/' + file['Paper_id']['arXiv']
+        arXiv_webpage = requests.get(url)
+        assert arXiv_webpage.status_code < 400, colored('arXiv ID does not resolve to a valid paper', 'red')
+    print('Check paper information in initial metadata:' + colored(' PASSED!', 'green'))
+
+
+    # Check uploaded metadata.json for model description
+    
+    try:
+        assert file['Description']
+    except:
+        raise Exception(colored('"Description" field does not exist in metadata', 'red'))
+
+    # Check uploaded metadata.json for Model Homepage if exists
+
+    if 'Model Homepage' in file:
+        try:
+            assert requests.get(file['Model Homepage']).status_code < 400
+        except:
+            raise Exception(colored('"Model Homepage" link is invalid in metadata', 'red'))
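+
+    # For reference, a minimal initial metadata.json that would pass the checks
+    # above might look like this (all field values are illustrative only):
+    #
+    #     {
+    #       "Author": [{"name": "Jane Doe", "contact": "jane.doe@example.com"}],
+    #       "Paper_id": {"doi": "10.1016/j.cpc.2012.01.022"},
+    #       "Description": "An example UFO model",
+    #       "Model Homepage": "https://feynrules.irmp.ucl.ac.be/wiki/ModelDatabaseMainPage"
+    #     }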
+    
+    '''    Unpack the model inside a directory called "ModelFolder"     '''
+    
+    # Iterate over a copy: the directory branch below mutates original_file
+    for _file in list(original_file):
+        if _file.endswith('.json'):
+            pass
+        elif _file.endswith('.zip'):
+            with zipfile.ZipFile(_file, 'r') as zip_ref:
+                zip_ref.extractall('ModelFolder')
+        elif _file.endswith('.tar') or _file.endswith('.tgz') or _file.endswith('.tar.gz'):
+            tarfile.open(_file).extractall('ModelFolder')
+        elif os.path.isdir(_file):
+            with tarfile.open(_file + ".tgz", "w:gz") as cf:
+                for _f in os.listdir(_file):
+                    cf.add(_file + '/' + _f)
+            shutil.move(_file, 'ModelFolder')
+            original_file.remove(_file)
+            original_file.append(_file + ".tgz")
+        else:
+            raise Exception(colored("Valid Format for UFO directory not found", "red"))
+
+                
+    '''    Check if the compressed folder contains a single model and
+           reorganize its content inside a directory called "ModelFolder"    '''
+    
+    ModelFolder_Files = os.listdir('ModelFolder')
+    if '__init__.py' not in ModelFolder_Files:
+        if len(ModelFolder_Files) != 1:
+            raise Exception(colored('Uncompressed content has too many files/folders', 'red'))
+        if '__init__.py' not in os.listdir('ModelFolder/' + ModelFolder_Files[0]):
+            raise Exception(colored('"__init__.py" not available within model, not a Python Package!', 'red'))
+        for _file in os.listdir('ModelFolder/' + ModelFolder_Files[0]):
+            if _file == "__pycache__" or _file.endswith(".pyc") or _file.endswith("~"):
+                shutil.rmtree('ModelFolder/' + ModelFolder_Files[0] + '/' + _file)
+                continue
+            shutil.copy('ModelFolder/' + ModelFolder_Files[0] + '/' +  _file, 'ModelFolder/' +  _file)
+        shutil.rmtree('ModelFolder/' + ModelFolder_Files[0])
+
+
+    '''    Check if the model can be loaded as a python package    '''
+    
+    sys.path.append(model_path)
+    modelloc = model_path + '/ModelFolder'
+    sys.path.insert(0,modelloc)
+
+    if sys.version_info.major == 3:
+        try:
+            UFOModel = importlib.import_module('ModelFolder')
+        except SyntaxError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('The model may not be compatible with Python 3 or may contain invalid syntax. Please check, and try with Python 2 instead',
+                                    'red'))
+        except ModuleNotFoundError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('The model may be missing some files, please check again', 'red'))
+        except AttributeError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('Undefined variables in your imported modules, please check again', 'red'))
+        except NameError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('Some modules/variables not imported/defined, please check again', 'red'))
+        except TypeError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('At least one of the variables is missing a required positional argument, please check again.','red'))
+    else:
+        try:
+            UFOModel = importlib.import_module('ModelFolder')
+        except SyntaxError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('The model may contain invalid syntax, please check again', 'red'))
+        except ImportError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('The model may be missing some files, please check again', 'red'))
+        except AttributeError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('Undefined variables in your imported modules, please check again', 'red'))
+        except NameError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('Some modules/variables not imported/defined, please check again', 'red'))
+        except TypeError:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('At least one of the variables is missing a required positional argument, please check again.','red'))
+        
+    os.chdir('ModelFolder')
+
+    print("Check for module imported as a python package: " + colored("PASSED!", "green"))
+
+
+    '''    Check the existence of model-independent files    '''
+    
+    ModelFiles = os.listdir('.')
+    Neccessary_MI_Files = ['__init__.py', 'object_library.py', 'function_library.py', 'write_param_card.py']
+    Missing_MI_Files = [i for i in Neccessary_MI_Files if i not in ModelFiles]
+    if Missing_MI_Files != []:
+        print(colored('Your model lacks the files below:', 'red'))
+        print(colored(', '.join(Missing_MI_Files), 'yellow'))
+        raise Exception(colored('Sorry, your model is missing some required model-independent files.', 'red'))
+    else:
+        print("Check if model contains necessary model-independent files: " + colored("PASSED!", 'green'))
+
+    '''    Check the existence of model-dependent files    '''
+    
+    Neccessary_MD_Files = ['parameters.py','particles.py','coupling_orders.py','couplings.py','lorentz.py','vertices.py']
+    Missing_MD_Files = [i for i in Neccessary_MD_Files if i not in ModelFiles]
+    if Missing_MD_Files != []:
+        print(colored('Your model lacks the files below:', 'red'))
+        print(colored(', '.join(Missing_MD_Files), 'yellow'))
+        raise Exception(colored('Sorry, your model is missing some required model-dependent files.', 'red'))
+    else:
+        print("Check if model contains necessary model-dependent files: " + colored("PASSED!", 'green'))
+
+    '''    Check individual files within the model    '''
+    
+    # Check on model-independent files
+    try:
+        import object_library
+    except:
+        raise Exception(colored('The file "object_library.py" could not be imported. Please check again', 'red'))
+
+    # Check parameters.py and if it contains some parameters
+    try:
+        import parameters
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "parameters.py" could not be imported. Please check again', 'red'))
+    
+ 
+    params = []
+    number_of_params = 0
+    for i in [item for item in dir(parameters) if not item.startswith("__")]:    
+        item = getattr(parameters,i)
+        if type(item) == (object_library.Parameter):
+            params.append(item.name)
+            number_of_params += 1
+
+    if len(params) == 0:
+        raise Exception(colored('There should be some parameters defined in "parameters.py"', 'red'))
+    else:
+        print('Check if model contains well behaved "parameters.py": ' + colored("PASSED!", 'green'))
+        print('The model contains %i parameters.' %(number_of_params))
+
+    del sys.modules['parameters']
+    
+    # Check particles.py and all particles
+    try:
+        import particles
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "particles.py" could not be imported. Please check again', 'red'))
+
+    particle_dict = {}
+    SM_elementary_particle_dict = {}
+    BSM_elementary_particle_with_registered_PDGID_dict = {}
+    Particle_with_PDG_like_ID_dict = {}
+    pdg_code_list = []
+    for i in [item for item in dir(particles) if not item.startswith("__")]:
+        item = getattr(particles,i)
+        if type(item) == (object_library.Particle):
+            if item.GhostNumber == 0:
+                particle_dict[item.name] = item.pdg_code
+                pdg_code_list.append(item.pdg_code)
+
+                if PDGID(item.pdg_code).is_valid:
+                    if PDGID(item.pdg_code).three_charge != int(round(item.charge*3)):
+                        Particle_with_PDG_like_ID_dict[item.name] = {'id': item.pdg_code,
+                                                                     'spin': item.spin,
+                                                                     'charge': item.charge}
+                    if PDGID(item.pdg_code).j_spin != item.spin:
+                        if item.spin == 1 and PDGID(item.pdg_code).j_spin is None:
+                            pass
+                        else:
+                            Particle_with_PDG_like_ID_dict[item.name] = {'id': item.pdg_code,
+                                                                         'spin': item.spin,
+                                                                         'charge': item.charge}
+
+                    #if PDGID(item.pdg_code).is_quark or PDGID(item.pdg_code).is_lepton or PDGID(item.pdg_code).is_gauge_boson_or_higgs:
+                    if item.spin in [1,2,3]:   
+                        if PDGID(item.pdg_code).is_sm_quark or PDGID(item.pdg_code).is_sm_lepton or PDGID(item.pdg_code).is_sm_gauge_boson_or_higgs:
+                            SM_elementary_particle_dict[item.name] = item.pdg_code
+                        elif item.name not in Particle_with_PDG_like_ID_dict.keys():
+                            BSM_elementary_particle_with_registered_PDGID_dict[item.name] = item.pdg_code
+                else:
+                    Particle_with_PDG_like_ID_dict[item.name] = {'id': item.pdg_code,
+                                                                 'spin': item.spin,
+                                                                 'charge': item.charge}                    
+
+    if len(particle_dict) == 0:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('There should be real particles defined in "particles.py"', 'red'))
+
+    if len(set(pdg_code_list)) != len(pdg_code_list):
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('Some of your particles have the same PDG code, please check again!', 'red'))
+
+    print('Check if model contains well behaved "particles.py": ' + colored("PASSED!", 'green'))
+
+    print('The model contains %i particles' %(len(particle_dict)))
+
+    print('The model contains %i Standard Model elementary particles with registered pdg codes:' %(len(SM_elementary_particle_dict)))
+    for key, value in SM_elementary_particle_dict.items():
+        print(key, value)
+
+    print('The model contains %i Beyond the Standard Model elementary particles with registered pdg codes:' %(len(BSM_elementary_particle_with_registered_PDGID_dict)))
+    for key, value in BSM_elementary_particle_with_registered_PDGID_dict.items():
+        print(key, value)
+
+    print('The model contains %i particles with pdg-like codes:' %(len(Particle_with_PDG_like_ID_dict)))
+    for key, value in Particle_with_PDG_like_ID_dict.items():
+        print(key, value)
+
+    del sys.modules['particles']
+
+    # Check vertices.py and if it contains vertices
+    try:
+        import vertices
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "vertices.py" could not be imported. Please check again', 'red'))
+
+    vertex = []
+    number_of_vertices = 0
+    for i in [item for item in dir(vertices) if not item.startswith("__")]:
+        item = getattr(vertices,i)
+        if type(item) == (object_library.Vertex):
+            vertex.append(item.name)
+            number_of_vertices += 1
+
+    if len(vertex) == 0:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('There should be vertices defined in "vertices.py"', 'red'))
+    else:
+        print('Check if model contains well behaved "vertices.py": ' + colored("PASSED!", 'green'))
+        print('The model contains %i vertices' %(number_of_vertices))
+        
+    del sys.modules['vertices']
+
+    # Check coupling_orders.py and if it contains coupling orders
+    try:
+        import coupling_orders
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "coupling_orders.py" could not be imported. Please check again', 'red'))
+
+    coupling_order = []
+    number_of_coupling_orders = 0
+    for i in [item for item in dir(coupling_orders) if not item.startswith("__")]:
+        item = getattr(coupling_orders,i)
+        if type(item) == (object_library.CouplingOrder):
+            coupling_order.append(item.name)
+            number_of_coupling_orders += 1
+
+    if len(coupling_order) == 0:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('There should be coupling orders defined in "coupling_orders.py"','red'))
+    else:
+        print('Check if model contains well behaved "coupling_orders.py": ' + colored("PASSED!", 'green'))
+        print('The model contains %i coupling_orders' %(number_of_coupling_orders))
+
+    del sys.modules['coupling_orders']
+
+
+    # Check couplings.py and if it contains couplings
+    try:
+        import couplings
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "couplings.py" could not be imported. Please check again', 'red'))
+
+    coupling_tensor = []
+    number_of_coupling_tensors = 0
+    for i in [item for item in dir(couplings) if not item.startswith("__")]:
+        item = getattr(couplings,i)
+        if type(item) == (object_library.Coupling):
+            coupling_tensor.append(item.name)
+            number_of_coupling_tensors += 1
+
+    if len(coupling_tensor) == 0:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('There should be coupling tensors defined in "couplings.py"', 'red'))
+    else:
+        print('Check if model contains well behaved "couplings.py": ' + colored("PASSED!", 'green'))
+        print('The model contains %i couplings' %(number_of_coupling_tensors))
+
+    del sys.modules['couplings']
+
+
+    # Check lorentz.py and see if it contains lorentz tensors
+    try:
+        import lorentz
+    except ImportError:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('The file "lorentz.py" could not be imported. Please check again', 'red'))
+
+    lorentz_tensor = []
+    number_of_lorentz_tensors = 0
+    for i in [item for item in dir(lorentz) if not item.startswith("__")]:    
+        item = getattr(lorentz,i)
+        if type(item) == (object_library.Lorentz):
+            lorentz_tensor.append(item.name)
+            number_of_lorentz_tensors += 1
+
+    if len(lorentz_tensor) == 0:
+        os.chdir(model_path)
+        shutil.rmtree('ModelFolder')
+        raise Exception(colored('There should be lorentz tensors defined in "lorentz.py"', 'red'))
+    else:
+        print('Check if model contains well behaved "lorentz.py": ' + colored("PASSED!", 'green'))
+        print('The model contains %i lorentz tensors' %(number_of_lorentz_tensors))
+
+    del sys.modules['lorentz']
+
+
+    # Check if propagators.py contains propagators
+    try:
+        import propagators
+        props = []
+        number_of_propagators = 0
+        for i in [item for item in dir(propagators) if not item.startswith("__")]:
+            item = getattr(propagators,i)
+            if type(item) == (object_library.Propagator):
+                props.append(item.name)
+                number_of_propagators += 1
+
+        if len(props) == 0:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('There should be propagators defined in "propagators.py"', 'red'))
+        else:
+            print('Check if model contains well behaved "propagators.py": ' + colored("PASSED!", 'green'))
+            print('The model contains %i propagators' %(number_of_propagators))
+        del sys.modules['propagators']  
+    except ImportError:
+        number_of_propagators = 0
+
+    # Check if decays.py contains decays
+    try:
+        import decays
+        decay = []
+        number_of_decays = 0
+        for i in [item for item in dir(decays) if not item.startswith("__")]:
+            item = getattr(decays,i)
+            if type(item) == (object_library.Decay):
+                decay.append(item.name)
+                number_of_decays += 1
+
+        if len(decay) == 0:
+            os.chdir(model_path)
+            shutil.rmtree('ModelFolder')
+            raise Exception(colored('There should be decays defined in "decays.py"', 'red'))
+        else:
+            print('Check if model contains well behaved "decays.py": ' + colored("PASSED!", 'green'))
+            print('The model contains %i decays' %(number_of_decays))
+        del sys.modules['decays']
+    except ImportError:
+        number_of_decays = 0
+
+    # Check if the model supports NLO calculations
+    try:
+        import CT_couplings
+        CT_coupling = []
+        for i in [item for item in dir(CT_couplings) if not item.startswith("__")]:
+            item = getattr(CT_couplings,i)
+            if type(item) == (object_library.Coupling):
+                CT_coupling.append(item.name)
+        if len(CT_coupling) == 0:
+            Check_CTCouplings = False
+            print('Check if model contains well behaved "CT_couplings.py": ' + colored("FAILED!", 'red'))
+        else:
+            print('Check if model contains well behaved "CT_couplings.py": ' + colored("PASSED!", 'green'))
+            Check_CTCouplings = True
+        del sys.modules['CT_couplings']
+    except ImportError:
+        print('Check if model contains well behaved "CT_couplings.py": ' + colored("FAILED!", 'red'))
+        Check_CTCouplings = False
+
+    try:
+        import CT_parameters
+        CT_parameter = []
+        for i in [item for item in dir(CT_parameters) if not item.startswith("__")]:
+            item = getattr(CT_parameters,i)
+            if type(item) == (object_library.CTParameter):
+                CT_parameter.append(item.name)
+        if len(CT_parameter) == 0:
+            Check_CTParameters = False
+            print('Check if model contains well behaved "CT_parameters.py": ' + colored("FAILED!", 'red'))
+        else:
+            print('Check if model contains well behaved "CT_parameters.py": ' + colored("PASSED!", 'green'))
+            Check_CTParameters = True
+        # Drop the cached module so the next model's CT_parameters can be imported
+        del sys.modules['CT_parameters']
+    except ImportError:
+        print('Check if model contains well behaved "CT_parameters.py": ' + colored("FAILED!", 'red'))
+        Check_CTParameters = False
+
+    try:
+        import CT_vertices
+        CT_vertex = []
+        for i in [item for item in dir(CT_vertices) if not item.startswith("__")]:
+            item = getattr(CT_vertices,i)
+            if type(item) == (object_library.CTVertex):
+                CT_vertex.append(item.name)
+        if len(CT_vertex) == 0:
+            Check_CTVertices = False
+            print('Check if model contains well behaved "CT_vertices.py": ' + colored("FAILED!", 'red'))
+        else:
+            print('Check if model contains well behaved "CT_vertices.py": ' + colored("PASSED!", 'green'))
+            Check_CTVertices = True
+        # Drop the cached module so the next model's CT_vertices can be imported
+        del sys.modules['CT_vertices']
+    except ImportError:
+        print('Check if model contains well behaved "CT_vertices.py": ' + colored("FAILED!", 'red'))
+        Check_CTVertices = False
+    
+    if Check_CTCouplings and Check_CTParameters and Check_CTVertices:
+        print(colored('The model allows NLO calculations.','green'))
+        NLO_value = True
+    else:
+        print(colored('The model does not allow NLO calculations.','red'))
+        NLO_value = False
+    
+    # Finish the validation checking
+    os.chdir(model_path)
+    shutil.rmtree('ModelFolder')
+    sys.path.remove(model_path)
+    sys.path.remove(modelloc)
+    for f in [f for f in sys.modules.keys() if 'ModelFolder' in f]:
+        del sys.modules[f]
+    for f in ['particles', 'parameters', 'vertices', 'coupling_orders', 'couplings', 'lorentz', 'propagators', 'decays']:
+        if f in sys.modules.keys():
+            del sys.modules[f]        
+
+    return file, original_file, number_of_params, particle_dict, SM_elementary_particle_dict, Particle_with_PDG_like_ID_dict, BSM_elementary_particle_with_registered_PDGID_dict, number_of_vertices, number_of_coupling_orders, number_of_coupling_tensors, number_of_lorentz_tensors, number_of_propagators, number_of_decays, NLO_value
+
+
+def validator_all(all_models):
+    base_path = os.getcwd()
+    for _path in all_models:
+        os.chdir(_path)
+        _ = validator(model_path = os.getcwd())
+        os.chdir(base_path)
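+
+# Example (hypothetical directory names): validate several models in one pass,
+# assuming each directory contains a metadata.json plus one archived UFO model:
+#
+#     >>> validator_all(['./DMsimp_s_spin1', './SM_HeavyN_NLO'])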
+
+
+def metadatamaker(model_path, create_file = True):
+    # Check Validation and get necessary outputs
+    file, original_file, number_of_params, particle_dict, SM_elementary_particle_dict, Particle_with_PDG_like_ID_dict, BSM_elementary_particle_with_registered_PDGID_dict, number_of_vertices, number_of_coupling_orders, number_of_coupling_tensors, number_of_lorentz_tensors, number_of_propagators, number_of_decays, NLO_value = validator(model_path)
+    filename = [i for i in original_file if i != 'metadata.json'][0]
+    print('\nWorking on model: ' + colored(model_path, "magenta") + "\n")
+    modelname = raw_input('Please name your model:')
+    modelversion = raw_input('Please enter your model version:')
+    Doi = "0"
+    if 'Model Homepage' in file:
+        Homepage = file['Model Homepage']
+    else:
+        if create_file:
+            Homepage = raw_input('Please enter your model homepage:')
+        else:
+            Homepage = ''
+    # if create_file or "Model Doi" not in file.keys():
+    #    Doi = raw_input('Please enter your Model Doi, enter 0 if not have one:')
+    if  "Model Doi" in file.keys():
+        Doi = file["Model Doi"]
+    newcontent = {'Model name' : modelname,
+                'Model Homepage' : Homepage,
+                'Model Doi' : Doi,
+                'Model Version' : modelversion,
+                'Model Python Version' : sys.version_info.major,
+                'Allows NLO calculations': NLO_value,
+                'All Particles' : particle_dict,
+                'SM particles' : SM_elementary_particle_dict,
+                'BSM particles with standard PDG codes': BSM_elementary_particle_with_registered_PDGID_dict,
+                'Particles with PDG-like IDs': Particle_with_PDG_like_ID_dict,
+                'Number of parameters' : number_of_params,
+                'Number of vertices' : number_of_vertices,
+                'Number of coupling orders' : number_of_coupling_orders,
+                'Number of coupling tensors' : number_of_coupling_tensors,
+                'Number of lorentz tensors' : number_of_lorentz_tensors,
+                'Number of propagators' : number_of_propagators,
+                'Number of decays' : number_of_decays
+                }
+
+    file.update(newcontent)
+    meta_name = filename.split('.')[0].strip()
+    if not meta_name:
+        raise Exception("Invalid filename: '{}', please check".format(filename))
+    metadata_name =  meta_name + '.json'
+    if create_file:
+        with open(metadata_name,'w') as metadata:
+            json.dump(file,metadata,indent=2)
+        # Check new metadata
+        print('Now you can see the metadata file %s for your new model in the same directory.' %(colored(metadata_name, "magenta")))
+
+    return file, filename, modelname, metadata_name
+
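The metadata file naming above (take the stem before the first dot, reject empty stems, append `.json`) can be sketched in isolation; the model file name below is purely illustrative:

```python
# Standalone sketch of metadatamaker's metadata-file naming rule:
# stem before the first '.', rejected when empty, then '.json' appended.
def metadata_name_for(filename):
    stem = filename.split('.')[0].strip()
    if not stem:
        raise ValueError("Invalid filename: '{}', please check".format(filename))
    return stem + '.json'

print(metadata_name_for('DMsimp_s_spin1.tar.gz'))  # DMsimp_s_spin1.json
```

Note that everything after the first dot is discarded, so `model.tar.gz` and `model.zip` both map to `model.json`.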
+
+def metadatamaker_all(all_models):
+    base_path = os.getcwd()
+    for _path in all_models:
+        print("\nChecking Model: " + colored(_path, "magenta") + "\n")
+        os.chdir(_path)
+        _ = metadatamaker(model_path = os.getcwd())
+        os.chdir(base_path)
+
+def is_parent(child, parent):
+    # Walk the first-parent ancestry chain of `child`; True once `parent` is reached.
+    if not child.parents:
+        return False
+    if child.sha == parent.sha:
+        return True
+    return is_parent(child.parents[0], parent)
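The first-parent ancestry walk can be exercised with a stand-in commit class (hypothetical; the real function receives PyGithub commit objects exposing the same `.sha`/`.parents` shape):

```python
# Stand-in for a PyGithub commit: just a sha and a list of parent commits.
class FakeCommit:
    def __init__(self, sha, parents=None):
        self.sha = sha
        self.parents = parents or []

# Copy of the ancestry walk: follow the first parent until the chain ends.
def is_parent(child, parent):
    if not child.parents:
        return False
    if child.sha == parent.sha:
        return True
    return is_parent(child.parents[0], parent)

root = FakeCommit("a")
mid = FakeCommit("b", [root])
tip = FakeCommit("c", [mid])

print(is_parent(tip, mid))  # True: the fork tip descends from upstream's head
print(is_parent(mid, tip))  # False: not the other way around
```

Because only `parents[0]` is followed, ancestors reached exclusively through the second parent of a merge commit are not found; for the linear histories of a freshly synced fork this is sufficient.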
+
+        
+def uploader(model_path, myrepo, myfork, params):
+    
+    '''    Generate the metadata for the model   '''
+    file, filename, modelname, metadata_name = metadatamaker(model_path, create_file=False)
+
+    '''Check metadata file name'''
+    Allmetadata = myrepo.get_contents('Metadata')
+    Allmetadataname = [i.name for i in Allmetadata]
+    while True:
+        if metadata_name in Allmetadataname:
+            url = 'https://raw.githubusercontent.com/Neubauer-Group/UFOMetadata/main/Metadata/'
+            url += metadata_name
+            metadata = requests.get(url)
+            with open(metadata_name, 'wb') as fout:
+                fout.write(metadata.content)
+            with open(metadata_name,encoding='utf-8') as metadata:
+                file = json.load(metadata)
+            DOI = file['Model Doi']
+            print('Your metadata file name has been used. Please check the model with DOI: ' + colored(DOI, 'red') + ' in Zenodo.')
+            os.remove(metadata_name)
+            continuecommand = raw_input('Do you want to continue your upload?' + \
+                                    colored(' Yes', 'green') + ' or' + colored(' No', 'red') + ':')
+            if continuecommand == 'Yes':
+                while True:
+                    metadata_name = raw_input('Please rename your metadata file:').replace(' ','_')
+                    try:
+                        assert metadata_name.endswith('.json')
+                        break
+                    except:
+                        print('Your metadata file name should end with ' + colored('.json', 'red') + '.')
+            else:
+                sys.exit()
+        else:
+            break
+
+    '''    Check if  Zenodo token works    '''    
+    # Create an empty upload
+    headers = {"Content-Type": "application/json"}
+    r = requests.post("https://zenodo.org/api/deposit/depositions", 
+                      params= params,
+                      json= {},
+                      headers= headers)
+    if r.status_code >= 400:
+        print(colored("Creating deposition entry with Zenodo Failed!", "red"))
+        print("Status Code: {}".format(r.status_code))
+        raise Exception
+    
+    # Work with Zenodo API
+    
+    bucket_url = r.json()["links"]["bucket"]
+    Doi = r.json()["metadata"]["prereserve_doi"]["doi"]
+    deposition_id = r.json()["id"]
+
+    # Upload new files
+    path = model_path + '/%s' %(filename)
+    with open(path, 'rb') as fp:
+        r = requests.put("%s/%s" %(bucket_url, filename),
+                         data = fp,
+                         params = params)
+        if r.status_code >= 400:
+            print(colored("Putting content to Zenodo Failed!", "red"))
+            print("Status Code: {}".format(r.status_code))
+            raise Exception
+
+    Author_Information = []
+    for i in file['Author']:
+        if 'affiliation' in i:
+            Author_Information.append({"name": i['name'],
+                                    "affiliation": i['affiliation']})
+        else:
+            Author_Information.append({"name": i['name']})
+
+    data = { 'metadata' : {
+            'title': modelname,
+            'upload_type': 'dataset',
+            'description': file['Description'],
+            'creators': Author_Information          
+        }
+    }
+
+    # Add required metadata to draft
+    r = requests.put('https://zenodo.org/api/deposit/depositions/%s' %(deposition_id),
+                     params=params,
+                     data=json.dumps(data),
+                     headers=headers)
+    if r.status_code >= 400:
+        print(colored("Adding metadata to the Zenodo deposition failed!", "red"))
+        print("Status Code: {}".format(r.status_code))
+        raise Exception
+    file["Model Doi"] = Doi
+
+    # Use Zenodo page as Homepage if there's no homepage provided
+    if len(file['Model Homepage']) == 0:
+        file['Model Homepage'] = 'https://doi.org/' + Doi
+
+    with open(metadata_name,'w') as metadata:
+        json.dump(file,metadata,indent=2)
+
+
+    '''    Upload to Github Repository    '''
+
+
+    # Create new metadata file in the forked repo
+    with open(metadata_name) as metadata:
+        f = metadata.read()
+    GitHub_filename = 'Metadata/' + metadata_name
+    myfork.create_file(GitHub_filename, 'Upload metadata for model: {}'.format(metadata_name.replace('.json', '')), f, branch='main')
+
+    if r.status_code == 200:
+        print('Now you can go to Zenodo to see your draft with DOI: %s, make some changes, and get ready to publish your model.'%colored(Doi, 'magenta'))
+        publish_command = raw_input('Do you want to publish your model and send your new enriched metadata file to GitHub repository UFOMetadata? ' + \
+                                    colored(' Yes', 'green') + ' or' + colored(' No', 'red') + ':')
+        if publish_command == 'Yes':
+            r = requests.post('https://zenodo.org/api/deposit/depositions/%s/actions/publish' %(deposition_id),
+                              params=params)
+            if r.status_code != 202:
+                print(colored("Publishing model with Zenodo Failed!", "red"))
+                print(r.json())
+                raise Exception
+            print('Your model has been successfully uploaded to Zenodo with DOI: %s' %(Doi))
+            print('You can access your model in Zenodo at: {}'.format(r.json()['links']['record_html']))
+            print('\n\n')
+        else:
+            print("You can publish your model yourself. Then, please send your enriched metadata file to %s and we will help upload your metadata to the GitHub repository."%colored("thanoswang@163.com/zijun4@illinois.edu", "blue"))
+    else:
+        print("Your Zenodo upload draft may have some problems. You can check your draft on Zenodo and publish it yourself. Then, please send your enriched metadata file to %s and we will help upload your metadata to the GitHub repository."%colored("thanoswang@163.com/zijun4@illinois.edu", "blue"))
+
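The author-list conversion inside `uploader` (metadata `Author` entries into Zenodo `creators` records) is a pure transformation and can be sketched on its own; the sample names below are illustrative:

```python
# Sketch of uploader's author handling: "name" is always kept, while
# "affiliation" is carried over only when the metadata entry provides one;
# other keys (e.g. "contact") are dropped from the Zenodo record.
def zenodo_creators(authors):
    creators = []
    for author in authors:
        entry = {"name": author["name"]}
        if "affiliation" in author:
            entry["affiliation"] = author["affiliation"]
        creators.append(entry)
    return creators

sample = [
    {"name": "Wang, Zijun", "affiliation": "UIUC", "contact": "a@b.c"},
    {"name": "Roy, Avik"},
]
print(zenodo_creators(sample))
```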
+
+def uploader_all(all_models):
+    
+    '''    Check if  Zenodo token works    '''
+    Zenodo_Access_Token = getpass('Please enter your Zenodo access token:')
+    params = {'access_token': Zenodo_Access_Token}
+    r = requests.get("https://zenodo.org/api/deposit/depositions", params=params)
+    if r.status_code >= 400:
+        raise Exception(colored("URL connection with Zenodo Failed!", "red") + " Status Code: " + colored("{}".format(r.status_code), "red"))
+    print("Validating Zenodo access token: " + colored("PASSED!", "green"))
+    
+
+    '''    Check if Github token works    '''
+    Github_Access_Token = getpass('Please enter your GitHub access token:')
+    try:
+        g = Github(Github_Access_Token)
+        github_user = g.get_user()
+        # Get the public repo
+        repo = g.get_repo('Neubauer-Group/UFOMetadata')
+    except Exception:
+        raise Exception(colored("Github access token cannot be validated", "red"))
+
+    print("Validating Github access token: " + colored("PASSED!", "green"))
+
+    
+    '''    Check if user's metadata repo is in sync with upstream    '''
+    # Create a fork in user's github
+    myfork = github_user.create_fork(repo)
+
+    # Check if the fork is up to date with main
+    if not is_parent(myfork.get_branch('main').commit,  repo.get_branch('main').commit): 
+        print(colored("Your fork of the UFOMetadata repo is out of sync with the upstream!", "red"))
+        print(colored("Please retry after syncing your fork with the upstream", "yellow"))
+        raise Exception
+
+    # Now put all models in zenodo and put their metadata in the local fork of metadata repo
+    base_path = os.getcwd()
+    for _path in all_models:
+        print("\nChecking Model: " + colored(_path, "magenta") + "\n")
+        os.chdir(_path)
+        uploader(model_path = os.getcwd(), myrepo= repo, myfork = myfork, params = params)
+        os.chdir(base_path)
+
+    # Pull Request from forked branch to original
+    username = g.get_user().login
+    body = 'Upload metadata for new model(s)'
+    pr = repo.create_pull(title="Upload metadata for a new model", body=body, head='{}:{}'.format(username,'main'), base='{}'.format('main'))
+    print('''
+    You have successfully uploaded your model(s) to Zenodo and created a pull request adding your new enriched metadata file(s) to the GitHub repository''' + colored(' UFOMetadata', 'magenta') + '''. 
+    Your pull request to UFOMetadata will be checked by GitHub's CI workflow.
+    If your pull request fails or the workflow doesn't start, please contact ''' +  colored('thanoswang@163.com/zijun4@illinois.edu' ,'blue')
+    )
+
+
+def newversion(model_path, myrepo, myfork, params, depositions):
+
+    '''    Check for necessary files and their formats    '''
+    original_file = os.listdir(model_path)
+
+    assert 'metadata.json' in original_file, \
+        'Check if initial "metadata.json" exists: ' + colored('FAILED!', 'red')
+    try:
+        with open('metadata.json') as metadata:
+            file = json.load(metadata)
+    except Exception:
+        raise Exception(colored('Check if initial "metadata.json" is correctly formatted: ') + colored('FAILED!', 'red'))
+    print('Check if initial "metadata.json" exists and is correctly formatted: ' + colored('PASSED!', 'green'))
+
+    '''    Check existing model DOI    '''
+    try:
+        assert file['Existing Model Doi']
+    except:
+        raise Exception(colored('"Existing Model Doi" field does not exist in metadata', 'red'))
+
+    try:
+        assert 'zenodo' in file['Existing Model Doi']
+    except Exception:
+        raise Exception(colored('We suggest you upload your model to Zenodo', 'red'))
+
+    url = 'https://doi.org/' + file['Existing Model Doi']
+    existing_model_webpage = requests.get(url)
+
+    try:
+        assert existing_model_webpage.status_code < 400
+    except Exception:
+        raise Exception(colored('We cannot find your model page with the provided existing model DOI.', 'red'))
+
+
+    '''    Generate the metadata for the model   '''
+    file, filename, modelname, metadata_name = metadatamaker(model_path, create_file=False)
+    
+    '''Check metadata file name'''
+    Allmetadata = myrepo.get_contents('Metadata')
+    Allmetadataname = [i.name for i in Allmetadata]
+    while True:
+        if metadata_name in Allmetadataname:
+            url = 'https://raw.githubusercontent.com/Neubauer-Group/UFOMetadata/main/Metadata/'
+            url += metadata_name
+            metadata = requests.get(url)
+            with open(metadata_name, 'wb') as fout:
+                fout.write(metadata.content)
+            with open(metadata_name,encoding='utf-8') as metadata:
+                file = json.load(metadata)
+            DOI = file['Model Doi']
+            print('Your metadata file name has been used. Please check the model with DOI: ' + colored(DOI, 'red') + ' in Zenodo.')
+            os.remove(metadata_name)
+            continuecommand = raw_input('Do you want to continue your upload?' + \
+                                    colored(' Yes', 'green') + ' or' + colored(' No', 'red') + ':')
+            if continuecommand == 'Yes':
+                while True:
+                    metadata_name = raw_input('Please rename your metadata file:').replace(' ','_')
+                    try:
+                        assert metadata_name.endswith('.json')
+                        break
+                    except:
+                        print('Your metadata file name should end with ' + colored('.json', 'red') + '.')
+            else:
+                sys.exit()
+        else:
+            break
+
+    '''    Find corresponding old version from the concept DOI    '''
+    filenames = []
+    found_entry = False
+    entry = None
+    for _entry in depositions:
+        if _entry['conceptdoi'].strip().split('.')[-1] == file['Existing Model Doi'].strip().split('.')[-1]:
+            found_entry = True
+            entry = _entry
+            break
+    assert found_entry, colored('The zenodo entry corresponding to DOI: {} not found'.format(file['Existing Model Doi']), 'red')
+
+    old_deposition_id = entry['links']['latest'].strip().split('/')[-1]
+    _r = requests.get("https://zenodo.org/api/records/{}".format(old_deposition_id), params=params)
+    for _file in _r.json()['files']:
+        link = _file['links']['self'].strip()
+        fname = link.split('/')[-1]
+        filenames.append(fname)
+
+    print('Your previous upload contains the file(s): %s. Do you want to delete any of them?' %(colored(','.join(filenames), 'magenta')))
+
+    deletelist = [f.strip() for f in raw_input('Please enter the file names you want to delete from your new version, separated by commas, or enter ' + colored("No", "red") + ": ").split(',')]
+
+
+    # Work with the new version draft
+    '''    Request a new version    '''
+    r = requests.post("https://zenodo.org/api/deposit/depositions/%s/actions/newversion"%(old_deposition_id),params=params)
+    if r.status_code >= 400:
+        print(colored("Creating a new version draft with Zenodo failed!", "red"))
+        print("Status Code: {}".format(r.status_code))
+        raise Exception
+
+    # Get new deposition id
+    new_deposition_id = r.json()['links']['latest_draft'].split('/')[-1]
+    
+    if deletelist[0] != 'No':
+        r = requests.get("https://zenodo.org/api/deposit/depositions/%s/files"%(new_deposition_id), params=params)
+        if r.status_code >= 400:
+            print(colored("Could not fetch file details from latest version!", "red"))
+            print("Status Code: {}".format(r.status_code))
+            raise Exception
+        for _file in r.json():
+            if _file['filename'] in deletelist:
+                _link = _file['links']['self']
+                r = requests.delete(_link, params=params)
+
+    headers = {"Content-Type": "application/json"}
+    
+    r = requests.get('https://zenodo.org/api/deposit/depositions/%s' %(new_deposition_id),
+                     json={},
+                     params=params,
+                     headers=headers )
+
+    if r.status_code >= 400:
+        print(colored("Fetching the new version draft from Zenodo failed!", "red"))
+        print("Status Code: {}".format(r.status_code))
+        raise Exception
+
+    bucket_url = r.json()["links"]["bucket"]
+    Doi = r.json()["metadata"]["prereserve_doi"]["doi"]
+    
+    # Upload new model files
+    path = model_path + '/%s' %(filename)
+    with open(path, 'rb') as fp:
+        r = requests.put("%s/%s" %(bucket_url, filename),
+                         data = fp,
+                         params = params)
+        if r.status_code >= 400:
+            print(colored("Putting content to Zenodo Failed!", "red"))
+            print("Status Code: {}".format(r.status_code))
+            raise Exception
+
+    # Create Zenodo upload metadata
+    Author_Information = []
+    for i in file['Author']:
+        if 'affiliation' in i:
+            Author_Information.append({"name": i['name'],
+                                    "affiliation": i['affiliation']})
+        else:
+            Author_Information.append({"name": i['name']})
+
+    
+    data = { 'metadata' : {
+            'title': modelname,
+            'upload_type': 'dataset',
+            'description': file['Description'],
+            'creators': Author_Information,
+            'version': file['Model Version'],
+            'publication_date': datetime.datetime.today().strftime('%Y-%m-%d')         
+        }
+    }
+
+    r = requests.put('https://zenodo.org/api/deposit/depositions/%s' %(new_deposition_id),
+                     params=params,
+                     data=json.dumps(data),
+                     headers=headers)
+    
+    if r.status_code >= 400:
+        print(colored("Adding metadata to the Zenodo deposition failed!", "red"))
+        print("Status Code: {}".format(r.status_code))
+        raise Exception
+
+    file["Model Doi"] = Doi
+
+    # Use Zenodo page as Homepage if there's no homepage provided
+    if len(file['Model Homepage']) == 0:
+        file['Model Homepage'] = 'https://doi.org/' + Doi
+
+    '''    Create enriched metadata file    '''
+    newmetadataname = metadata_name.split('.')[0] + '.V' + file['Model Version'] + '.json' 
+
+    with open(newmetadataname,'w') as metadata:
+        json.dump(file,metadata,indent=2)
+
+    
+    '''    Upload to Github Repository    '''
+
+    with open(newmetadataname) as metadata:
+        f = metadata.read()
+    GitHub_filename = 'Metadata/' + newmetadataname
+    myfork.create_file(GitHub_filename, 'Upload metadata for model: {}'.format(metadata_name.replace('.json', '')), f, branch='main')
+
+    if r.status_code == 200:
+        print('Now you can go to Zenodo to see your draft with DOI: %s, make some changes, and get ready to publish your model.'%colored(Doi, 'magenta'))
+        publish_command = raw_input('Do you want to publish your model and send your new enriched metadata file to GitHub repository UFOMetadata? ' + \
+                                    colored(' Yes', 'green') + ' or' + colored(' No', 'red') + ':')
+        if publish_command == 'Yes':
+            r = requests.post('https://zenodo.org/api/deposit/depositions/%s/actions/publish' %(new_deposition_id),
+                              params=params)
+            if r.status_code != 202:
+                print(colored("Publishing model with Zenodo Failed!", "red"))
+                print(r.json())
+                raise Exception
+            print('Your model has been successfully uploaded to Zenodo with DOI: %s' %(Doi))
+            print('You can access your model in Zenodo at: {}'.format(r.json()["links"]["record_html"]))
+            print('\n\n')
+        else:
+            print("You can publish your model yourself. Then, please send your enriched metadata file to %s and we will help upload your metadata to the GitHub repository."%colored("thanoswang@163.com", "blue"))
+    else:
+        print("Your Zenodo upload draft may have some problems. You can check your draft on Zenodo and publish it yourself. Then, please send your enriched metadata file to %s and we will help upload your metadata to the GitHub repository."%colored("thanoswang@163.com", "blue"))
+
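Two small string manipulations in `newversion` are worth isolating: concept-DOI matching compares only the trailing Zenodo record number, and the versioned metadata name splices the model version into the file stem. A sketch (the DOIs and file names are illustrative):

```python
# Concept-DOI matching as done in newversion: only the numeric suffix
# after the last '.' is compared, after stripping whitespace.
def same_concept(doi_a, doi_b):
    return doi_a.strip().split('.')[-1] == doi_b.strip().split('.')[-1]

# Versioned metadata file name: '<stem>.V<version>.json'.
def versioned_name(metadata_name, version):
    return metadata_name.split('.')[0] + '.V' + version + '.json'

print(same_concept('10.5281/zenodo.1234567', ' 10.5281/zenodo.1234567 '))  # True
print(versioned_name('DMsimp_s_spin1.json', '2.0'))  # DMsimp_s_spin1.V2.0.json
```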
+def newversion_all(all_models):
+
+    '''    Check if  Zenodo token works    '''
+    Zenodo_Access_Token = getpass('Please enter your Zenodo access token:')
+    params = {'access_token': Zenodo_Access_Token}
+    r = requests.get("https://zenodo.org/api/deposit/depositions", params=params)
+    if r.status_code >= 400:
+        raise Exception(colored("URL connection with Zenodo Failed!", "red") + " Status Code: " + colored("{}".format(r.status_code), "red"))
+    print("Validating Zenodo access token: " + colored("PASSED!", "green"))
+    
+
+    '''    Check if Github token works    '''
+    Github_Access_Token = getpass('Please enter your GitHub access token:')
+    try:
+        g = Github(Github_Access_Token)
+        github_user = g.get_user()
+        # Get the public repo
+        repo = g.get_repo('Neubauer-Group/UFOMetadata')
+    except Exception:
+        raise Exception(colored("Github access token cannot be validated", "red"))
+
+    print("Validating Github access token: " + colored("PASSED!", "green"))
+
+    
+    '''    Check if user's metadata repo is in sync with upstream    '''
+    # Create a fork in user's github
+    myfork = github_user.create_fork(repo)
+
+    # Check if the fork is up to date with main
+    if not is_parent(myfork.get_branch('main').commit,  repo.get_branch('main').commit): 
+        print(colored("Your fork of the UFOMetadata repo is out of sync with the upstream!", "red"))
+        print(colored("Please retry after syncing your fork with the upstream", "yellow"))
+        raise Exception
+
+    # Now put all models in zenodo and put their metadata in the local fork of metadata repo
+    base_path = os.getcwd()
+    for _path in all_models:
+        print("\nChecking Model: " + colored(_path, "magenta") + "\n")
+        os.chdir(_path)
+        newversion(model_path = os.getcwd(), myrepo= repo, myfork = myfork, params = params, depositions = r.json())
+        os.chdir(base_path)
+
+    # Pull Request from forked branch to original
+    username = g.get_user().login
+    body = 'Upload metadata for new model(s)'
+    pr = repo.create_pull(title="Upload metadata for a model's new version", body=body, head='{}:{}'.format(username,'main'), base='{}'.format('main'))
+    print('''
+    You have successfully uploaded your model(s) to Zenodo and created a pull request adding your new enriched metadata file(s) to the GitHub repository''' + colored(' UFOMetadata', 'magenta') + '''. 
+    Your pull request to UFOMetadata will be checked by GitHub's CI workflow.
+    If your pull request fails or the workflow doesn't start, please contact ''' +  colored('thanoswang@163.com/zijun4@illinois.edu' ,'blue')
+    )
+
+
+
+def githubupload(model_path, myrepo, myfork):
+    '''    Check for necessary files and their formats    '''
+    original_file = os.listdir(model_path)
+
+    assert 'metadata.json' in original_file, \
+        'Check if initial "metadata.json" exists: ' + colored('FAILED!', 'red')
+    try:
+        with open('metadata.json') as metadata:
+            file = json.load(metadata)
+    except Exception:
+        raise Exception(colored('Check if initial "metadata.json" is correctly formatted: ') + colored('FAILED!', 'red'))
+    print('Check if initial "metadata.json" exists and is correctly formatted: ' + colored('PASSED!', 'green'))
+
+    '''    Check existing model DOI    '''
+    try:
+        assert file['Model Doi']
+    except:
+        raise Exception(colored('"Model Doi" field does not exist in metadata', 'red'))
+    
+    try:
+        assert 'zenodo' in file['Model Doi']
+    except Exception:
+        raise Exception(colored('We suggest you upload your model to Zenodo', 'red'))
+
+
+    url = 'https://doi.org/' + file['Model Doi']
+    existing_model_webpage = requests.get(url)
+    try:
+        assert existing_model_webpage.status_code < 400
+    except Exception:
+        raise Exception(colored('We cannot find your model page with the provided existing model DOI.', 'red'))
+
+    
+    '''    Generate the metadata for the model   '''
+    file, filename, modelname, metadata_name = metadatamaker(model_path, create_file=False)
+
+    # Use Zenodo page as Homepage if there's no homepage provided
+    if len(file['Model Homepage']) == 0:
+        file['Model Homepage'] = 'https://doi.org/' + file['Model Doi']
+
+    with open(metadata_name,'w') as metadata:
+        json.dump(file,metadata,indent=2)
+
+    '''Check metadata file name'''
+    Allmetadata = myrepo.get_contents('Metadata')
+    Allmetadataname = [i.name for i in Allmetadata]
+    while True:
+        if metadata_name in Allmetadataname:
+            url = 'https://raw.githubusercontent.com/Neubauer-Group/UFOMetadata/main/Metadata/'
+            url += metadata_name
+            metadata = requests.get(url)
+            with open(metadata_name, 'wb') as fout:
+                fout.write(metadata.content)
+            with open(metadata_name,encoding='utf-8') as metadata:
+                file = json.load(metadata)
+            DOI = file['Model Doi']
+            print('Your metadata file name has been used. Please check the model with DOI: ' + colored(DOI, 'red') + ' in Zenodo.')
+            os.remove(metadata_name)
+            continuecommand = raw_input('Do you want to continue your upload?' + \
+                                    colored(' Yes', 'green') + ' or' + colored(' No', 'red') + ':')
+            if continuecommand == 'Yes':
+                while True:
+                    metadata_name = raw_input('Please rename your metadata file:').replace(' ','_')
+                    try:
+                        assert metadata_name.endswith('.json')
+                        break
+                    except:
+                        print('Your metadata file name should end with ' + colored('.json', 'red') + '.')
+            else:
+                sys.exit()
+        else:
+            break
+
+    '''    Upload to Github Repository    '''
+
+
+    # Create new metadata file in the forked repo
+    with open(metadata_name) as metadata:
+        f = metadata.read()
+    GitHub_filename = 'Metadata/' + metadata_name
+    myfork.create_file(GitHub_filename, 'Upload metadata for model: {}'.format(metadata_name.replace('.json', '')), f, branch='main')
+
+
+def githubupload_all(all_models):
+
+    '''    Check if Github token works    '''
+    Github_Access_Token = getpass('Please enter your GitHub access token:')
+    try:
+        g = Github(Github_Access_Token)
+        github_user = g.get_user()
+        # Get the public repo
+        repo = g.get_repo('Neubauer-Group/UFOMetadata')
+    except Exception:
+        raise Exception(colored("Github access token cannot be validated", "red"))
+
+    print("Validating Github access token: " + colored("PASSED!", "green"))
+
+    
+    '''    Check if user's metadata repo is in sync with upstream    '''
+    # Create a fork in user's github
+    myfork = github_user.create_fork(repo)
+
+    # Check if the fork is up to date with main
+    if not is_parent(myfork.get_branch('main').commit,  repo.get_branch('main').commit): 
+        print(colored("Your fork of the UFOMetadata repo is out of sync with the upstream!", "red"))
+        print(colored("Please retry after syncing your fork with the upstream", "yellow"))
+        raise Exception
+
+    # Now put all models in zenodo and put their metadata in the local fork of metadata repo
+    base_path = os.getcwd()
+    for _path in all_models:
+        print("\nChecking Model: " + colored(_path, "magenta") + "\n")
+        os.chdir(_path)
+        githubupload(model_path = os.getcwd(), myrepo= repo, myfork = myfork)
+        os.chdir(base_path)
+
+    # Pull Request from forked branch to original
+    username = g.get_user().login
+    body = 'Upload metadata for new model(s)'
+    pr = repo.create_pull(title="Upload metadata for a new model", body=body, head='{}:{}'.format(username,'main'), base='{}'.format('main'))
+    print('''
+    You have successfully created a pull request adding your new enriched metadata file(s) to the GitHub repository''' + colored(' UFOMetadata', 'magenta') + '''. 
+    Your pull request to UFOMetadata will be checked by GitHub's CI workflow.
+    If your pull request fails or the workflow doesn't start, please contact ''' +  colored('thanoswang@163.com/zijun4@illinois.edu' ,'blue')
+    )
+
+def UFOUpload(command,modelpath):
+    with open(modelpath) as f:
+        all_models = [line.strip() for line in f if line.strip() and not line.strip().startswith('#')]
+    if command == 'Validation check':
+        validator_all(all_models=all_models)
+    elif command == 'Generate metadata':
+        metadatamaker_all(all_models=all_models)
+    elif command == 'Upload model':
+        uploader_all(all_models=all_models)
+    elif command == 'Update new version':
+        newversion_all(all_models=all_models)
+    elif command == 'Upload metadata to GitHub':
+        githubupload_all(all_models=all_models)
+    else:
+        print('Wrong command! Please choose from ["Validation check", "Generate metadata", "Upload model", "Update new version", "Upload metadata to GitHub"].')
+
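The model-list file consumed by `UFOUpload` is plain text with one model directory per line, `#` marking comment lines. The parsing can be sketched standalone (the paths below are illustrative); blank lines are skipped here as well, since an empty entry would later make `os.chdir` fail:

```python
# Sketch of the model-list parsing: one directory per line, '#' lines
# are comments, and blank lines are dropped.
def read_model_list(lines):
    models = []
    for line in lines:
        entry = line.strip()
        if entry and not entry.startswith('#'):
            models.append(entry)
    return models

sample = ['# my models\n', 'models/DMsimp_s_spin1\n', '\n', 'models/SM_NLO\n']
print(read_model_list(sample))  # ['models/DMsimp_s_spin1', 'models/SM_NLO']
```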
+if __name__ == '__main__':
+    FUNCTION_MAP = {'Validation check' : validator_all,
+                    'Generate metadata' : metadatamaker_all,
+                    'Upload model': uploader_all,
+                    'Update new version' : newversion_all,
+                    'Upload metadata to GitHub': githubupload_all}
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument('command', choices=FUNCTION_MAP.keys())
+    args = parser.parse_args()
+    RunFunction = FUNCTION_MAP[args.command]
+
+    TXT = raw_input('Please enter the path to a text file with the list of all UFO models:')
+    with open(TXT) as f:
+        all_models = [line.strip() for line in f if line.strip() and not line.strip().startswith('#')]
+    RunFunction(all_models = all_models)
diff --git a/UFOManager-2.0.0/UFOManager/__init__.py b/UFOManager-2.0.0/UFOManager/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..14ed5f85742c37ebb7136e6eb876a9ce7eb3b889
--- /dev/null
+++ b/UFOManager-2.0.0/UFOManager/__init__.py
@@ -0,0 +1,22 @@
+__author__ = 'Mark Neubauer, Avik Roy, Zijun Wang'
+__all__=[
+    'validator',
+    'validator_all',
+    'metadatamaker',
+    'metadatamaker_all',
+    'is_parent',
+    'uploader',
+    'uploader_all',
+    'newversion',
+    'newversion_all',
+    'githubupload',
+    'githubupload_all',
+    'Upload',
+    'AccessGitRepo',
+    'Display',
+    'Search',
+    'Downloader',
+    'Search_Download',
+    'Delete',
+    'Download'
+]
diff --git a/UFOManager-2.0.0/metadata.json b/UFOManager-2.0.0/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..a04d02bb146d60b4eae668b0862c8e2c3e4518d0
--- /dev/null
+++ b/UFOManager-2.0.0/metadata.json
@@ -0,0 +1,21 @@
+{
+    "Author": [
+        {"name" : "NAME",
+         "affiliation(optional)": "AFFILIATION",
+         "contact(optional)": "contact email address",
+         "Comment (Don't include)": "Affiliation and contact are optional, but at least one contact is needed."},
+        {"name" : "NAME",
+         "affiliation(optional)": "AFFILIATION",
+         "contact(optional)": "contact email address"}
+    ],
+    "Paper_id": {
+        "doi" : "Digital Object Identifier",
+        "arXiv" : "arXiv Identifier",
+        "comment (Don't need to include)": "At least one of doi or arXiv identifier must be given"
+    },
+    "Model Homepage (Optional)": "Link to the official homepage of your model. We will use the Zenodo page link if you choose to upload your model to Zenodo but do not provide this value.",
+    "Description": "Write your description of your model here.",
+    "Existing Model Doi(if you are using 'Update new version')": "Zenodo-issued concept-DOI for your model",
+    "Model Doi(if you are using 'Upload metadata to GitHub')": "Zenodo DOI of your model"
+}
+
diff --git a/UFOManager-2.0.0/requirements_Python2.txt b/UFOManager-2.0.0/requirements_Python2.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9b212f2e6edc74159056f13dd24e54e20937119c
--- /dev/null
+++ b/UFOManager-2.0.0/requirements_Python2.txt
@@ -0,0 +1,23 @@
+attrs==21.4.0
+certifi==2021.10.8
+chardet==4.0.0
+contextlib2==0.6.0.post1
+Deprecated==1.2.13
+enum34==1.1.10
+hepunits==2.1.3
+idna==2.10
+importlib-resources==3.3.1
+particle==0.16.3
+pathlib2==2.3.7.post1
+PyGithub==1.45
+PyJWT==1.7.1
+requests==2.27.1
+scandir==1.10.0
+singledispatch==3.7.0
+six==1.16.0
+termcolor==1.1.0
+typing==3.10.0.0
+typing-extensions==3.10.0.2
+urllib3==1.26.13
+wrapt==1.14.1
+zipp==1.2.0
diff --git a/UFOManager-2.0.0/requirements_Python3.txt b/UFOManager-2.0.0/requirements_Python3.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b8de223786ebf190f7971fde432d86e1ab87e19d
--- /dev/null
+++ b/UFOManager-2.0.0/requirements_Python3.txt
@@ -0,0 +1,21 @@
+attrs==22.2.0
+certifi==2022.6.15
+cffi==1.15.1
+charset-normalizer==2.1.0
+Deprecated==1.2.13
+hepunits==2.3.1
+idna==3.3
+importlib-resources==5.10.2
+particle==0.21.1
+pycparser==2.21
+PyGithub==1.55
+PyJWT==2.4.0
+PyNaCl==1.5.0
+requests==2.28.1
+tabulate==0.9.0
+termcolor==1.1.0
+urllib3==1.26.10
+wget==3.2
+wrapt==1.14.1
+zenodo-get==1.3.4
+zipp==3.11.0