STORM: Ultimate AI Research Tool Revolutionizes Writing

STORM (Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking) is an innovative knowledge curation system powered by Large Language Models (LLMs). Developed by a team of researchers from Stanford University, STORM is designed to revolutionize the way we approach topic research and report generation. This groundbreaking project can autonomously research a given subject from scratch and produce a comprehensive, fully-cited report, marking a significant advancement in artificial intelligence and knowledge synthesis.

Project Overview

STORM was created by a collaborative effort of researchers including Yijia Shao, Yucheng Jiang, Theodore A. Kanell, Peter Xu, Omar Khattab, and Monica S. Lam. Their work on STORM has gained recognition in the academic community, with a paper on the system accepted for presentation at the prestigious NAACL 2024 conference.

How STORM Works

The STORM system breaks down the process of generating long-form articles with citations into two distinct phases:

1. Pre-writing Phase

During this initial stage, STORM conducts extensive Internet-based research to gather relevant references and information on the given topic. Using this collected data, the system then generates a comprehensive outline for the article.

2. Writing Phase

In the second stage, STORM utilizes the outline and references compiled during the pre-writing phase to craft a full-length article, complete with proper citations.
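
To make the division of labor between the two phases concrete, the following minimal sketch mirrors them as plain Python functions. Every name and body in it is a hypothetical placeholder standing in for STORM's retrieval, outlining, and drafting components; it is illustrative only, not the project's actual API.

   # Illustrative sketch of the two-phase pipeline. Every function name and body
   # below is a hypothetical placeholder, not STORM's actual API.

   def search_and_collect(topic: str) -> list[str]:
       """Placeholder for Internet-based retrieval of references."""
       return [f"https://example.org/search?q={topic.replace(' ', '+')}"]

   def generate_outline(topic: str, references: list[str]) -> list[str]:
       """Placeholder for the LLM-drafted outline built from the references."""
       return [f"{topic}: Background", f"{topic}: Key developments"]

   def expand_section(heading: str, references: list[str]) -> str:
       """Placeholder for drafting one section with inline citations."""
       return f"## {heading}\nDiscussion of this section with citation [1] ({references[0]})."

   def pre_writing(topic: str) -> tuple[list[str], list[str]]:
       """Phase 1: research the topic, then derive an outline from what was found."""
       references = search_and_collect(topic)
       outline = generate_outline(topic, references)
       return outline, references

   def writing(topic: str, outline: list[str], references: list[str]) -> str:
       """Phase 2: expand the outline into a full, cited article."""
       return "\n\n".join(expand_section(h, references) for h in outline)

   outline, references = pre_writing("Quantum error correction")
   print(writing("Quantum error correction", outline, references))

The key point the sketch captures is that the writing phase takes only the outline and references produced by the pre-writing phase as input.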

Key Features and Innovations

STORM’s approach to automated research and writing is built on several innovative features:

  1. Multi-perspective Research: The system is designed to explore topics from various angles, ensuring a well-rounded and comprehensive analysis.
  2. Intelligent Question Formulation: STORM simulates multi-turn research conversations in which the LLM poses relevant, probing questions about the topic, driving deeper research (see the sketch after this list).
  3. Adaptive Learning: As the system gathers information, it continuously refines its understanding of the topic, allowing for more nuanced and accurate content generation.
  4. Automatic Citation: STORM integrates citations seamlessly into the generated content, maintaining academic integrity and providing readers with sources for further exploration.
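
To make the multi-perspective idea concrete, here is a small, purely hypothetical sketch of how perspective discovery and per-perspective question asking could be wired together. The ask_llm() helper and the prompts are placeholders rather than STORM's actual implementation; the max_perspective and max_conv_turn parameters simply mirror the CLI flags used in the experiment commands further below.

   # Hypothetical sketch of multi-perspective question asking: each discovered
   # perspective drives its own multi-turn line of questioning about the topic.
   # ask_llm() and the prompts are placeholders, not STORM's implementation.

   def ask_llm(prompt: str) -> str:
       """Placeholder for a call to whichever LLM backend is configured."""
       return f"[LLM response to: {prompt[:60]}...]"

   def discover_perspectives(topic: str, max_perspective: int = 5) -> list[str]:
       """Ask the LLM for distinct editorial perspectives on the topic."""
       prompt = (f"List {max_perspective} distinct perspectives for researching "
                 f"'{topic}', one per line.")
       return ask_llm(prompt).splitlines()[:max_perspective]

   def ask_questions(topic: str, perspective: str, max_conv_turn: int = 5) -> list[str]:
       """Simulate a short conversation: one new question per turn, avoiding repeats."""
       questions: list[str] = []
       for _ in range(max_conv_turn):
           prompt = (f"As a researcher focused on '{perspective}', ask one question "
                     f"about '{topic}' not already covered by: {questions}")
           questions.append(ask_llm(prompt))
       return questions

   topic = "Quantum error correction"
   for perspective in discover_perspectives(topic):
       print(perspective, ask_questions(topic, perspective))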

Setting Up STORM

For those interested in running STORM locally to reproduce experiments or explore its capabilities, follow these steps:

  1. Environment Setup:
   conda create -n storm python=3.11
   conda activate storm
   pip install -r requirements.txt
  2. API Configuration:
    Create a file named secrets.toml in the root directory with the following content:
   # OpenAI API Key
   OPENAI_API_KEY="<your_openai_api_key>"

   # Specify API type (OpenAI or Azure)
   OPENAI_API_TYPE="openai"  # or "azure"

   # If using Azure, include these lines:
   # AZURE_API_BASE="<your_azure_api_base_url>"
   # AZURE_API_VERSION="<your_azure_api_version>"

   # You.com Search API Key
   YDC_API_KEY="<your_youcom_api_key>"
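
Because secrets.toml must be valid TOML (string values quoted), a quick sanity check before launching experiments can save a failed run. The snippet below is not part of STORM; it simply parses the file with Python 3.11's standard-library tomllib and reports which of the expected keys are present.

   # Quick sanity check that secrets.toml parses and contains the expected keys.
   # Convenience snippet only; not part of the STORM codebase.
   import tomllib  # standard library in Python 3.11+

   with open("secrets.toml", "rb") as f:
       secrets = tomllib.load(f)

   for key in ("OPENAI_API_KEY", "OPENAI_API_TYPE", "YDC_API_KEY"):
       print(key, "set" if secrets.get(key) else "MISSING")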

Running Experiments

STORM offers flexibility in running experiments, either in batch on the FreshWiki dataset or for individual topics entered at the console:

Pre-writing Phase

  • For batch experiments on the FreshWiki dataset:
  python -m scripts.run_prewriting --input-source file --input-path ../FreshWiki/topic_list.csv --engine gpt-4 --do-research --max-conv-turn 5 --max-perspective 5
  • For single topic experiments:
  python -m scripts.run_prewriting --input-source console --engine gpt-4 --max-conv-turn 5 --max-perspective 5 --do-research

Writing Phase

  • For batch experiments on the FreshWiki dataset:
  python -m scripts.run_writing --input-source file --input-path ../FreshWiki/topic_list.csv --engine gpt-4 --do-polish-article --remove-duplicate
  • For single topic experiments:
  python -m scripts.run_writing --input-source console --engine gpt-4 --do-polish-article --remove-duplicate
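
Since the writing phase builds on the outline and references produced by the pre-writing phase, the two batch commands are normally run back to back. Below is a minimal Python sketch that chains them with subprocess; the paths and flags are copied from the commands above and assume the same directory layout, so adjust them to your checkout.

   # Minimal sketch chaining the two batch commands above from Python.
   import subprocess

   common = ["--input-source", "file",
             "--input-path", "../FreshWiki/topic_list.csv",
             "--engine", "gpt-4"]

   # Phase 1: research and outline generation.
   subprocess.run(["python", "-m", "scripts.run_prewriting", *common,
                   "--do-research", "--max-conv-turn", "5", "--max-perspective", "5"],
                  check=True)

   # Phase 2: article drafting and polishing from the phase-1 artifacts.
   subprocess.run(["python", "-m", "scripts.run_writing", *common,
                   "--do-polish-article", "--remove-duplicate"],
                  check=True)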

Evaluation Metrics

The STORM team has developed automated evaluation scripts to assess the quality of both outlines and full articles generated by the system:

Outline Quality Evaluation

python eval_outline_quality.py --input-path ../FreshWiki/topic_list.csv --gt-dir ../FreshWiki --pred-dir ../results --pred-file-name storm_gen_outline.txt --result-output-path ../results/storm_outline_quality.csv

Full Article Quality Evaluation

python eval_article_quality.py --input-path ../FreshWiki/topic_list.csv --gt-dir ../FreshWiki --pred-dir ../results --output-dir ../results/storm_article_eval_results --pred-file-name storm_gen_article_polished.txt

Conclusion

STORM represents a significant step forward in automated knowledge curation and article generation. While the system’s output may require additional editing to meet publication standards, experienced Wikipedia editors who evaluated the system found it helpful in the pre-writing stage of article creation.

By leveraging the power of Large Language Models and innovative research methodologies, STORM opens up new possibilities for efficient, comprehensive, and well-cited content generation. As the project continues to evolve, it has the potential to revolutionize how we approach research and writing tasks across various domains.

For the most up-to-date information on STORM’s features and capabilities, interested users are encouraged to visit the official GitHub repository. As with any rapidly developing technology, the project may have undergone updates or changes since the time of this writing.
