Timing test on Azure ML
I have a partial answer:

1. No, it's abstracted.
The following types of data can expand into larger datasets during feature normalization, and are limited to less than 10 GB:

- Sparse
- Categorical
- Strings
- Binary data

(see this)
I'm not sure, but while working with it I didn't notice any difference between running a single experiment and running multiple experiments.
You can scale the machines in the standard tier (see this).
I would recommend looking at the new "Visual Interface" for the Azure ML service, which lets you go well over the 10 GB limit and bring your own compute clusters.
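As a rough sketch of what "bring your own compute cluster" looks like with the Azure ML Python SDK (v1): the cluster name, VM size, and node counts below are placeholder assumptions, and this needs a configured workspace (a local `config.json` and Azure credentials), so it won't run as-is outside your subscription:

```
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

# Load the workspace from a local config.json (assumed to exist)
ws = Workspace.from_config()

# Provision an autoscaling cluster; VM size and node counts are illustrative
provisioning_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D2_V2",
    min_nodes=0,   # scale to zero when idle to save cost
    max_nodes=4,
)

cluster = ComputeTarget.create(ws, "cpu-cluster", provisioning_config)
cluster.wait_for_completion(show_output=True)
```

Once created, the cluster can be passed as the compute target for your experiment runs, so the 10 GB Studio-style limit no longer applies.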
//BUILD 2019 announcement video: https://www.youtube.com/watch?v=QBPCaZo9xx0