How to Import NSE Option Chain Data into Python

NSE (National Stock Exchange of India) publishes a live option chain — a structured table of call and put options across different strike prices and expiry dates for index and stock derivatives. Pulling that data into Python opens the door to algorithmic analysis, backtesting, and real-time monitoring. But the method you use, and how reliable it turns out to be, depends heavily on your technical setup and goals.

What Is the NSE Option Chain and Why Import It?

The NSE option chain shows open interest (OI), volume, implied volatility (IV), bid/ask prices, and last traded price (LTP) for every available strike across a selected underlying — like NIFTY or BANKNIFTY.

Analysts import this data into Python to:

  • Track changes in open interest to gauge market sentiment
  • Build custom screeners for unusual activity
  • Calculate put-call ratio (PCR) programmatically
  • Feed options data into pricing models like Black-Scholes
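The put-call ratio mentioned above is a one-line aggregation once the chain is in a DataFrame. A minimal sketch, assuming columns named CE_OI and PE_OI (the values below are illustrative, not live data):

```python
import pandas as pd

# Hypothetical chain snapshot: open interest per strike for calls (CE) and puts (PE)
chain = pd.DataFrame({
    "strikePrice": [22000, 22100, 22200],
    "CE_OI": [1500, 2300, 1800],
    "PE_OI": [2100, 1900, 2600],
})

# PCR = total put OI / total call OI; values above 1 are often read as put-heavy positioning
pcr = chain["PE_OI"].sum() / chain["CE_OI"].sum()
print(round(pcr, 2))  # 1.18
```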

The data is public and updated in real time on the NSE website, which makes it accessible — but accessing it programmatically requires understanding how the site delivers it.

How NSE Delivers Option Chain Data

NSE's website is a JavaScript-rendered application. The option chain you see in your browser isn't embedded in a static HTML page — it's loaded dynamically via an internal API endpoint. This is important because tools that scrape raw HTML won't work here.

The underlying data comes from a JSON endpoint, which at the time of writing follows a pattern like:

https://www.nseindia.com/api/option-chain-indices?symbol=NIFTY 

This endpoint returns a structured JSON object containing strike-level data (open interest, volume, implied volatility, prices) along with the list of available expiry dates. Note that it exposes implied volatility but not a full set of Greeks — delta, gamma, and the rest must be computed separately if you need them.

Method 1: Using Python's requests Library with Session Headers 🐍

Because NSE uses cookies and user-agent validation to prevent automated scraping, a plain requests.get() call will return a 401 or redirect. You need to establish a browser-like session first.

```python
import requests
import pandas as pd

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "Referer": "https://www.nseindia.com/option-chain",
}

session = requests.Session()

# Step 1: Hit the main site to get cookies
session.get("https://www.nseindia.com", headers=headers, timeout=10)

# Step 2: Hit the option chain page to load additional cookies
session.get("https://www.nseindia.com/option-chain", headers=headers, timeout=10)

# Step 3: Fetch the actual data
url = "https://www.nseindia.com/api/option-chain-indices?symbol=NIFTY"
response = session.get(url, headers=headers, timeout=10)
data = response.json()
```

Once you have the JSON, you can parse it into a DataFrame:

```python
records = data['filtered']['data']
rows = []

for item in records:
    row = {}
    if 'CE' in item:
        row['CE_OI'] = item['CE'].get('openInterest')
        row['CE_LTP'] = item['CE'].get('lastPrice')
    if 'PE' in item:
        row['PE_OI'] = item['PE'].get('openInterest')
        row['PE_LTP'] = item['PE'].get('lastPrice')
    row['strikePrice'] = item.get('strikePrice')
    rows.append(row)

df = pd.DataFrame(rows)
print(df.head())
```
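A common next step is locating the strike carrying the heaviest open interest on each side, which traders often read as resistance (calls) and support (puts). A self-contained sketch using sample values in the same column layout the loop above produces:

```python
import pandas as pd

# Sample of the DataFrame shape produced by the parsing loop above (illustrative values)
df = pd.DataFrame({
    "strikePrice": [22000, 22100, 22200],
    "CE_OI": [1500, 2300, 1800],
    "PE_OI": [2100, 1900, 2600],
})

# Strike with the heaviest call OI is often watched as resistance,
# the heaviest put OI as support
max_call_strike = df.loc[df["CE_OI"].idxmax(), "strikePrice"]
max_put_strike = df.loc[df["PE_OI"].idxmax(), "strikePrice"]
print(max_call_strike, max_put_strike)  # 22100 22200
```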

Method 2: Third-Party Libraries

Several Python packages abstract this process. nsepython and nsetools are commonly referenced in the Python-finance community. These handle session management internally and expose cleaner function-based APIs:

```python
from nsepython import nse_optionchain_scrapper

data = nse_optionchain_scrapper("NIFTY")
```

The trade-off: third-party libraries depend on maintainers keeping pace with NSE's API changes. NSE has historically updated its endpoints without notice, which can break these libraries temporarily.

Key Variables That Affect Your Approach

  • NSE API changes: Endpoints and cookie behavior change; code may need updates
  • Request frequency: Too many calls in a short window can trigger rate limiting or IP blocks
  • Expiry selection: The JSON includes all expiries; you'll need to filter by date
  • Underlying asset: Index options (NIFTY, BANKNIFTY) and stock options use slightly different endpoints
  • Python environment: Library availability varies; some packages lag on newer Python versions

Parsing and Filtering the Data

The raw JSON from NSE is deeply nested. Key paths to know:

  • data['records']['data'] — full dataset across all expiries
  • data['filtered']['data'] — data filtered to the nearest expiry by default
  • data['records']['expiryDates'] — list of available expiry dates
  • data['records']['underlyingValue'] — current spot price of the index
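These paths can be exercised against a minimal mock of the response — useful for writing parsing code before wiring up the live fetch. The field names below match the paths listed above; the values are made up:

```python
# Minimal mock of the JSON shape described above (values are illustrative)
data = {
    "records": {
        "expiryDates": ["27-Jun-2024", "04-Jul-2024"],
        "underlyingValue": 23500.45,
        "data": [{"strikePrice": 23500, "expiryDate": "27-Jun-2024"}],
    },
    "filtered": {"data": [{"strikePrice": 23500, "expiryDate": "27-Jun-2024"}]},
}

spot = data["records"]["underlyingValue"]           # current index level
nearest_expiry = data["records"]["expiryDates"][0]  # NSE lists the nearest expiry first
print(spot, nearest_expiry)  # 23500.45 27-Jun-2024
```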

To filter by a specific expiry:

```python
expiry_date = "27-Jun-2024"

filtered = [
    item for item in data['records']['data']
    if item.get('expiryDate') == expiry_date
]
```

Common Issues and What Causes Them 🔧

403 / 401 errors: Almost always a session or cookie problem. Make sure you're hitting the homepage and option chain page before calling the API endpoint.

Empty JSON responses: NSE returns data only during market hours and pre-open sessions. Outside those windows, some fields return null or empty arrays.
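A defensive check before parsing avoids crashes on those off-hours responses. A sketch assuming the records/data shape described elsewhere in this guide:

```python
def has_chain_data(data):
    """Return True only if the response actually carries strike rows.
    Outside market hours NSE may return nulls or empty arrays."""
    try:
        rows = data["records"]["data"]
    except (KeyError, TypeError):
        return False
    return bool(rows)

print(has_chain_data({"records": {"data": []}}))                        # False
print(has_chain_data({"records": {"data": [{"strikePrice": 23500}]}}))  # True
print(has_chain_data(None))                                             # False
```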

Stale data in third-party libraries: If you're using nsepython or similar, check the GitHub repository's last commit date. A library that hasn't been updated in six months may be working against a deprecated endpoint.

Rate limiting: If you're polling frequently for live monitoring, space out your requests and consider caching the session object rather than creating a new one each time.
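A polling loop that reuses one session and backs off on failures might be sketched like this. This is a hedged outline, not NSE-specific code: fetch is a placeholder for whatever function performs the session-based request shown earlier, and the timing values are arbitrary starting points:

```python
import time

def backoff_delays(base=5, factor=2, retries=4):
    """Exponential backoff schedule in seconds: 5, 10, 20, 40 with the defaults."""
    return [base * factor ** i for i in range(retries)]

def poll(fetch, process, interval=60, retries=4):
    """Call fetch() every `interval` seconds, backing off and retrying on failure.
    `fetch` returns the parsed option-chain JSON; `process` consumes it."""
    while True:
        for delay in [0] + backoff_delays(retries=retries):
            time.sleep(delay)
            try:
                data = fetch()
                break
            except Exception:
                continue  # transient failure: wait out the next backoff step
        else:
            raise RuntimeError("all retries failed")
        process(data)
        time.sleep(interval)

print(backoff_delays())  # [5, 10, 20, 40]
```

Because the session object (and its cookies) lives outside the loop, each poll skips the cookie-bootstrapping steps and looks more like a returning browser than a fresh scraper.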

What Determines the Right Approach for Your Use Case

The session-based requests method gives you full control and no library dependency — but you maintain it yourself when NSE changes anything. Third-party libraries lower the setup barrier but introduce dependency risk. Scheduled scraping on a server introduces IP considerations that a local notebook doesn't face.

Whether you need only the nearest weekly expiry or the full multi-expiry chain, whether you're running this once a day or every minute, and whether your environment supports the relevant packages — these are the factors that shape which implementation actually works for your situation.