How to make sure that my AJAX requests are originating from the same server in Python


Depending on your infrastructure, @dragonx's answer might interest you most.

my 2c

You want to make sure that the API can only be used by a client that has visited your website? Hmm, do bots, robots and crawlers fall into the same category as a client then, or am I wrong? If you really want to secure it, this can be easily exploited.

I cannot believe I'm the only person with this requirement.

Maybe not, but as you can see your API is open to several kinds of attack, and that is a reason why others may not share your design and instead tighten security with authentication.

EDIT

Since we are talking about AJAX requests, what does the IP part have to do with this? The IP will always be the client's IP! So what you probably want is a public API...

  • I would go with the tokens/session/cookie part.

  • I'd go with a generated token that lasts a little while and the flow described below.

  • I'd go with a limiter per time window, like GitHub does, e.g. 60 requests per hour per IP, or more for registered users (a small sketch follows this list).
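To make the limiter idea concrete, here is a rough in-memory sketch; the 60-per-hour numbers and the class name are just my assumptions for illustration, and a real multi-process deployment would keep the counters in Redis or similar shared storage:

```python
# Sketch of a sliding-window "N requests per hour per IP" limiter.
# Purely in-memory; a multi-process deployment needs a shared store.
import time
from collections import defaultdict, deque


class RateLimiter:
    def __init__(self, limit=60, window=3600):
        self.limit = limit                # max requests...
        self.window = window              # ...per window (seconds)
        self._hits = defaultdict(deque)   # ip -> recent request timestamps

    def allow(self, ip):
        now = time.time()
        hits = self._hits[ip]
        # Evict timestamps that fell out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True


limiter = RateLimiter(limit=60, window=3600)
print(limiter.allow("203.0.113.5"))   # True until the 61st call this hour
```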

To overcome the problem with the refreshing token I would just do this:

  1. Client visits the site

    -> server generates API TOKEN INIT

    -> Client gets API TOKEN INIT, which is valid for starting only 1 request.

  2. Client makes AJAX Request to API

    -> Client uses API TOKEN INIT

    -> Server checks against API TOKEN INIT and limits

    -> Server accepts request

    -> Server passes back API TOKEN

    -> Client consumes response data and stores API TOKEN for further usage (Will be stored in browser memory via JS)

  3. Client starts communicating with the API for a limited amount of time or number of requests. Notice that you also know the init token's issue date, so you can check it against the 1st visit to the page.

The 1st token is generated by the server when the client visits. Then the client uses that token to obtain the real one, which lasts for some amount of time (or whatever other limitation you pick). This forces someone to actually visit the webpage before they can access the API, and then only for a limited amount of time, number of requests, etc.

This way you don't need refreshing.
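Here is a minimal sketch of that init-token/API-token flow, assuming Flask; the route names, header names, in-memory token stores and the 10-minute lifetime are all illustrative assumptions, not a definitive implementation:

```python
# Sketch of the init-token -> API-token flow described above (Flask assumed).
# Tokens live in process memory here; production code would use a shared store.
import time
import secrets

from flask import Flask, abort, jsonify, render_template_string, request

app = Flask(__name__)

INIT_TOKENS = {}     # init token -> issue time (single use)
API_TOKENS = {}      # api token  -> issue time
API_TOKEN_TTL = 600  # seconds an API token stays valid (assumption)

PAGE = """<script>
  // The page carries the one-shot init token; JS exchanges it for an API token.
  const INIT_TOKEN = "{{ init_token }}";
</script>"""


@app.route("/")
def index():
    # 1. Client visits the site -> server generates API TOKEN INIT.
    token = secrets.token_urlsafe(32)
    INIT_TOKENS[token] = time.time()
    return render_template_string(PAGE, init_token=token)


@app.route("/api/handshake", methods=["POST"])
def handshake():
    # 2. Client spends the init token (valid for exactly one request)...
    init = request.headers.get("X-Init-Token", "")
    if INIT_TOKENS.pop(init, None) is None:
        abort(403)
    # ...and receives the real API token.
    token = secrets.token_urlsafe(32)
    API_TOKENS[token] = time.time()
    return jsonify(api_token=token)


@app.route("/api/data")
def data():
    # 3. Further requests present the API token until it expires.
    token = request.headers.get("X-Api-Token", "")
    issued = API_TOKENS.get(token)
    if issued is None or time.time() - issued > API_TOKEN_TTL:
        API_TOKENS.pop(token, None)
        abort(403)
    return jsonify(data="hello")
```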

Of course the above scenario could be simplified with only one token and a time limit as mentioned above.

Of course the above scenario is still vulnerable to advanced crawlers, etc., since you have no authentication.

Of course a clever attacker can grab tokens from the server and repeat the steps, but then you already had that problem from the start.

Some extra points

  • As the comments suggested, please disable writes to the API. You don't want to be the victim of a DoS attack that writes data if you have doubts about your implementation (if you do allow writes, use auth), or just close them for extra security.
  • The token scenario described above can also be made more elaborate, e.g. by constantly exchanging tokens.

Just for reference, GAE Cloud Storage uses signed_urls for much the same purpose.
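The same idea can be sketched with only the standard library: instead of storing tokens server-side, sign an expiry timestamp and verify the signature on each request. The secret and TTL below are assumptions for illustration, not GAE's actual mechanism:

```python
# Stateless signed token, roughly the idea behind signed URLs:
# sign an expiry time with a server-side secret, verify it on each request.
import hmac
import time
import base64
import hashlib

SECRET = b"server-side-secret"  # assumption: kept private on the server


def make_token(ttl=600):
    expires = str(int(time.time()) + ttl).encode()
    sig = hmac.new(SECRET, expires, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(expires + b"." + sig).decode()


def check_token(token):
    try:
        expires, sig = base64.urlsafe_b64decode(token).split(b".", 1)
    except (ValueError, TypeError):
        return False
    expected = hmac.new(SECRET, expires, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()


token = make_token(ttl=600)
print(check_token(token))   # True until the token expires
```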

Hope it helps.

PS. Regarding IP spoofing and defense against spoofing attacks, Wikipedia notes that reply packets won't be returned to the attacker:

Some upper layer protocols provide their own defense against IP spoofing attacks. For example, Transmission Control Protocol (TCP) uses sequence numbers negotiated with the remote machine to ensure that arriving packets are part of an established connection. Since the attacker normally can't see any reply packets, the sequence number must be guessed in order to hijack the connection. The poor implementation in many older operating systems and network devices, however, means that TCP sequence numbers can be predicted.


If it's purely the same server, you can verify requests against 127.0.0.1 or localhost.

Otherwise the solution is probably at the network level, to have a separate private subnet that you can check against. It should be difficult for an attacker to spoof your subnet without being on your subnet.
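As a rough sketch of both checks, assuming Flask (the 10.0.0.0/24 subnet is just an example value):

```python
# Reject requests that don't come from localhost or a trusted private subnet.
import ipaddress

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

TRUSTED_NET = ipaddress.ip_network("10.0.0.0/24")  # example subnet


@app.before_request
def require_internal_caller():
    addr = ipaddress.ip_address(request.remote_addr)
    # Allow 127.0.0.1 / ::1, or anything inside the trusted subnet.
    if not (addr.is_loopback or addr in TRUSTED_NET):
        abort(403)


@app.route("/internal/api")
def internal_api():
    return jsonify(ok=True)
```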


I guess you're a bit confused (or I am, please correct me). The fact that your JS code is served from the same server as your API does not mean AJAX requests will come from your server. The clients download the JS from your server and execute it, which results in requests to your API being sent from the clients, not from your server.

Now if the above scenario correctly describes your case, what you are probably trying to do is to protect your API from bot scraping. The easiest protection is CAPTCHA, and you can find some more ideas on the Wiki page.

If you are concerned that other sites may make AJAX calls to your API to copy your site functionality, you shouldn't be--AJAX requests can only be sent to the same server as the page the JS is running on, unless it is JSONP.