Curl gives response but python does not and the request call does not terminate?



You have to jump through some hoops with Python to get the file you're after. Mainly, you need to get the cookie part of the request headers right, otherwise you'll keep getting a 401 response.

First, you need to get the regular cookies from the authority www.nseindia.com. Then, you need to get the bm_sv cookie from https://www.nseindia.com/json/quotes/equity-historical.json. Finally, add a cookie called nseQuoteSymbols.
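The final Cookie header is just all of these pieces joined into one "key=value; key=value" string. A minimal sketch of that glue step, using made-up placeholder cookie values (the real ones come from the two requests above):

```python
def make_cookies(cookie_dict: dict) -> str:
    # Turn {"k": "v", ...} into the "k=v; k2=v2" form a Cookie header expects.
    return "; ".join(f"{k}={v}" for k, v in cookie_dict.items())

# Placeholder values standing in for the cookies the server actually sets.
regular_cookies = {"nsit": "abc123", "nseappid": "token456"}
bm_sv_cookie = {"bm_sv": "deadbeef"}
nseQuoteSymbols = 'nseQuoteSymbols=[{"symbol":"COALINDIA","identifier":null,"type":"equity"}]'

cookie_header = "; ".join(
    [make_cookies(regular_cookies), nseQuoteSymbols, make_cookies(bm_sv_cookie)]
)
print(cookie_header)
# nsit=abc123; nseappid=token456; nseQuoteSymbols=[...]; bm_sv=deadbeef
```

Note that each piece must be separated by "; " from the next one, or the server won't parse the header and you're back to 401s.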

Glue all that together and make the request to get the file.

Here's how:

from urllib.parse import urlencode

import requests

headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/88.0.4324.182 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'referer': 'https://www.nseindia.com/get-quotes/equity?symbol=COALINDIA',
}

payload = {
    "symbol": "COALINDIA",
    "series": '["EQ"]',
    "from": "04-04-2021",
    "to": "04-05-2021",
    "csv": "true",
}

api_endpoint = "https://www.nseindia.com/api/historical/cm/equity?"

nseQuoteSymbols = 'nseQuoteSymbols=[{"symbol":"COALINDIA","identifier":null,"type":"equity"}]'


def make_cookies(cookie_dict: dict) -> str:
    return "; ".join(f"{k}={v}" for k, v in cookie_dict.items())


with requests.Session() as connection:
    # First request: pick up the regular cookies from the authority.
    authority = connection.get("https://www.nseindia.com", headers=headers)
    # Second request: pick up the bm_sv cookie.
    historical_json = connection.get(
        "https://www.nseindia.com/json/quotes/equity-historical.json",
        headers=headers,
    )
    bm_sv_string = make_cookies(historical_json.cookies.get_dict())
    # Glue all the cookie pieces together with "; " separators.
    cookies = "; ".join(
        [make_cookies(authority.cookies.get_dict()), nseQuoteSymbols, bm_sv_string]
    )
    connection.headers.update({**headers, **{"cookie": cookies}})
    # Finally, request the file itself.
    the_real_slim_shady = connection.get(f"{api_endpoint}{urlencode(payload)}")
    # Grab the file name from the Content-disposition header and save the CSV.
    csv_file = the_real_slim_shady.headers["Content-disposition"].split("=")[-1]
    with open(csv_file, "wb") as f:
        f.write(the_real_slim_shady.content)

Output -> a .csv file with the historical quote data for the requested date range.