
Random state (Pseudo-random number) in Scikit learn


train_test_split splits arrays or matrices into random train and test subsets. That means that every time you run it without specifying random_state, you will get a different result; this is expected behavior. For example:

Run 1:

>>> a, b = np.arange(10).reshape((5, 2)), range(5)
>>> train_test_split(a, b)
[array([[6, 7],
        [8, 9],
        [4, 5]]), array([[2, 3],
        [0, 1]]), [3, 4, 2], [1, 0]]

Run 2:

>>> train_test_split(a, b)
[array([[8, 9],
        [4, 5],
        [0, 1]]), array([[6, 7],
        [2, 3]]), [4, 2, 0], [3, 1]]

It changes. On the other hand, if you use random_state=some_number, then you can guarantee that the output of Run 1 will be equal to the output of Run 2, i.e. your split will always be the same. It doesn't matter what the actual random_state number is: 42, 0, 21, ... The important thing is that every time you use 42, you will always get the same output the first time you make the split.

This is useful if you want reproducible results, for example in documentation, so that everybody can consistently see the same numbers when they run the examples. In practice I would say: you should set random_state to some fixed number while you test things, but then remove it in production if you really need a random (and not a fixed) split.
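To make this concrete, here is a minimal sketch (reusing the toy arrays from above) showing that a fixed random_state yields an identical split on every call:

```python
import numpy as np
from sklearn.model_selection import train_test_split

a, b = np.arange(10).reshape((5, 2)), list(range(5))

# Two calls with the same random_state produce exactly the same split.
split1 = train_test_split(a, b, random_state=42)
split2 = train_test_split(a, b, random_state=42)

print(np.array_equal(split1[0], split2[0]))  # True: identical train rows
print(split1[2] == split2[2])                # True: identical train labels
```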

Regarding your second question, a pseudo-random number generator is a number generator that produces numbers that look random but are in fact generated deterministically from a seed. Why they are not truly random is out of the scope of this question and probably won't matter in your case; you can take a look here for more details.
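As a quick illustration of that determinism, two NumPy generators seeded with the same number emit the exact same stream of "random" values, which is what seeding random_state relies on:

```python
import numpy as np

# Identically seeded generators produce identical "random" streams.
g1 = np.random.RandomState(42)
g2 = np.random.RandomState(42)

print(g1.randint(0, 100, size=5))
print(g2.randint(0, 100, size=5))  # same five numbers as above
```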


If you don't specify the random_state in your code, then every time you run (execute) your code a new random value is generated, and the train and test datasets will have different values each time.

However, if a fixed value is assigned, like random_state=42, then no matter how many times you execute your code the result will be the same, i.e. the same values in the train and test datasets.


If you don't mention the random_state in the code, then whenever you execute your code a new random value is generated, and the train and test datasets will have different values each time.

However, if you use a particular value for random_state (random_state=1 or any other value), the result will be the same every time, i.e. the same values in the train and test datasets. Refer to the code below:

import pandas as pd
from sklearn.model_selection import train_test_split

test_series = pd.Series(range(100))
# Same random_state in both calls, so both splits shuffle identically.
size30split = train_test_split(test_series, random_state=1, test_size=.3)
size25split = train_test_split(test_series, random_state=1, test_size=.25)
# Count elements shared by the two training subsets.
common = [element for element in size25split[0] if element in size30split[0]]
print(len(common))

No matter how many times you run the code, the output will be 70.

70

Now try removing random_state and running the code.

import pandas as pd
from sklearn.model_selection import train_test_split

test_series = pd.Series(range(100))
size30split = train_test_split(test_series, test_size=.3)
size25split = train_test_split(test_series, test_size=.25)
common = [element for element in size25split[0] if element in size30split[0]]
print(len(common))

Here the output will be different each time you execute the code.