What is the most accurate method in Python for computing the minimum norm solution, or the solution obtained from the pseudo-inverse?


My area of research involves a compression algorithm called Fourier extension (or Fourier continuation). Which method is most accurate depends strongly on the vector, I believe, due to smoothness properties. During the summer I used something called Savitzky-Golay; there are fairly numerically stable and accurate ways of filtering with it. However, my adviser has a method that is relatively fast and numerically stable. I don't know if I am allowed to post it; here is the article. I believe I may have already posted a partial Python version here during the summer.

This has little to do with Python specifically, because Python uses the same underlying libraries as most numerical computing environments: BLAS and LAPACK. The reference implementations are online at Netlib.
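For reference, here is what the two standard routes look like in NumPy; both delegate to LAPACK's SVD-based drivers, so they agree on the minimum-norm least-squares solution. The small rank-deficient system here is synthetic, just to illustrate the point:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]     # make A rank-deficient on purpose
b = rng.standard_normal(6)

# Route 1: SVD-based least squares (LAPACK gelsd under the hood).
# With rcond=None, tiny singular values are cut at machine-precision level.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# Route 2: explicit Moore-Penrose pseudo-inverse, also via the SVD.
x_pinv = np.linalg.pinv(A) @ b

# The two agree on the minimum-norm solution.
close = np.allclose(x_lstsq, x_pinv)
```

In practice `lstsq` is preferable to forming `pinv(A)` explicitly when you only need one solve.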

There are a number of other similar fast and numerically stable ideas that may be suitable, which I would recommend. There is an entire book devoted to this by Boyd; chapters 6 and 7 cover it. It is about total variation with regularization, which matters because of the underlying noise I imagine you may have in the signal.

Other aspects: you may need to regularize the SVD because of ill-conditioning, and there are entire books devoted to that as well. To answer the question "what is the best algorithm" directly: algorithms have multiple dimensions of merit, and you haven't really stated the properties of the problem. Also, in case you didn't know about Runge's phenomenon: using high-degree polynomials is adverse.
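A minimal sketch of what SVD regularization looks like in NumPy. This is Tikhonov (ridge) regularization via filter factors; the matrix, right-hand side, and the value of `lam` are illustrative choices of mine, not taken from the original problem:

```python
import numpy as np

def ridge_solve(A, b, lam):
    # Tikhonov-regularized least squares via the SVD: the filter
    # factors s / (s^2 + lam^2) damp the small singular values
    # instead of amplifying their reciprocals.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

# Ill-conditioned Vandermonde system as a stress test.
x = np.linspace(1, 12, 23)
A = np.vander(x)
b = A @ np.ones(A.shape[1])
x_ridge = ridge_solve(A, b, lam=1e-6)
```

In a real problem `lam` has to be tuned to the noise level (e.g. by the discrepancy principle or cross-validation).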

There is a whole class of Hermite-polynomial approaches for dealing with the Gibbs phenomenon, along with other filtering techniques, but the problem isn't posed well for them since you're using generic functions. I'd recommend getting Trefethen and Bau. Sometimes a Chebyshev reprojection is done.
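To make Runge's phenomenon, and why Chebyshev points help, concrete: the test function and node counts below are my own illustrative choices. A degree-14 polynomial through equispaced samples of the Runge function oscillates wildly near the endpoints, while the same degree on Chebyshev nodes stays tame:

```python
import numpy as np
from numpy.polynomial import Polynomial

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # classic Runge function
xx = np.linspace(-1, 1, 1001)             # dense evaluation grid

# Degree-14 interpolant through 15 equispaced nodes.
x_eq = np.linspace(-1, 1, 15)
p_eq = Polynomial.fit(x_eq, f(x_eq), 14)
err_eq = np.max(np.abs(p_eq(xx) - f(xx)))

# Same degree through 15 Chebyshev nodes.
x_ch = np.cos((2 * np.arange(15) + 1) * np.pi / 30)
p_ch = Polynomial.fit(x_ch, f(x_ch), 14)
err_ch = np.max(np.abs(p_ch(xx) - f(xx)))
```

The equispaced error is larger than 1 (worse than approximating by zero), while the Chebyshev-node error is a couple of orders of magnitude smaller.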

What is the condition number of K? Additionally, something called Runge's phenomenon occurs when fitting polynomials, and you should take it into account. With generic functions you need to do a low-rank approximation to regularize if the condition number is too high. I actually just read your code: you are using a Vandermonde matrix. I am going to demonstrate the problem pretty easily. Vandermonde matrices are bad; don't use them. Their conditioning blows up as the nodes (knots) spread out.

v = (1:.5:6);
V = vander(v);
c1 = cond(V)
v2 = (1:.5:12);
c2 = cond(vander(v2));
display(c1)
display(c2)

c1 =

6.0469e+12

c2 =

9.3987e+32
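Since the question is about Python, the same experiment in NumPy gives condition numbers of the same explosive order:

```python
import numpy as np

# NumPy analogue of the MATLAB snippet above: the condition number
# of a Vandermonde matrix grows explosively with the node range.
v = np.arange(1, 6.5, 0.5)      # 1, 1.5, ..., 6
v2 = np.arange(1, 12.5, 0.5)    # 1, 1.5, ..., 12

c1 = np.linalg.cond(np.vander(v))
c2 = np.linalg.cond(np.vander(v2))
print(f"c1 = {c1:.4e}, c2 = {c2:.4e}")
```

The second matrix is numerically singular: its computed condition number is limited only by rounding in the smallest singular values.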

I attempted a low-rank approximation, but Vandermonde matrices are not nice. See:

function B = lowapprox(A)
% Takes a matrix A
% Returns a low rank approx of it
% utilizing the SVD
chi = 1e-10;
[U,S,V] = svd(A,0);
DS = diag(S);
aa = find(DS > chi);
s = S(aa,aa);
k = length(aa);
Un = U(:,1:k);
Vn = V(:,1:k)';
B = Un*s*Vn;
end

V2 = vander(v2);
r2 = rank(V2);
c2 = cond(V2);
B = lowapprox(V2);
c3 = cond(B);
display(c3)

c2 =

   9.3987e+32

c3 =

   3.7837e+32

It does nothing, really. If you don't know what is happening when you take that inverse: the condition number equals the maximum singular value divided by the minimum, so you have some extremely small singular values down at machine precision.

Additionally, I think you have some confusion between minimum norm and regularization. You said you want the minimum-norm solution in the least-squares sense; the SVD gives exactly that. (It's property nine: A is constructed from a basis by the SVD.) This is covered in Trefethen, but the Vandermonde matrix is ill-conditioned regardless.
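A small self-contained check of that minimum-norm property (the matrix here is synthetic, not your K): among all least-squares solutions of a rank-deficient system, the SVD pseudo-inverse picks the one of smallest Euclidean norm, and shifting it by any null-space vector keeps the residual but grows the norm:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
A[:, 3] = A[:, 2]                 # duplicate column -> rank 3
b = rng.standard_normal(5)

x_min = np.linalg.pinv(A) @ b     # minimum-norm least-squares solution

# z = e3 - e4 lies in the null space of A (the columns are identical),
# so x_min + z has the same residual but a strictly larger norm.
z = np.array([0.0, 0.0, 1.0, -1.0])
x_other = x_min + z

assert np.allclose(A @ x_min, A @ x_other)
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)
```

This works because the minimum-norm solution lies in the row space of A, orthogonal to every null-space direction.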

Even small, badly constructed Vandermonde matrices are going to lose it. Now, about your solution: don't use Vandermonde matrices; construct the polynomial some other way. A better idea is barycentric Lagrange interpolation. A library is here.
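If you don't want a dependency, the barycentric formula is short enough to sketch directly. This is a pure-NumPy toy version of the idea, not the MATLAB barylag referenced below, and it assumes distinct nodes and a modest node count (the weights overflow for very many nodes):

```python
import numpy as np

def barylag(x, y, xq):
    # Barycentric Lagrange interpolation (second form); no
    # Vandermonde matrix is ever formed.
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k).
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)
    xq = np.atleast_1d(np.asarray(xq, float))
    out = np.empty_like(xq)
    for i, xi in enumerate(xq):
        d = xi - x
        hit = d == 0
        if hit.any():                 # query lands exactly on a node
            out[i] = y[hit][0]
        else:
            c = w / d
            out[i] = (c @ y) / c.sum()
    return out

t = np.linspace(0, np.pi, 12)
err = np.max(np.abs(barylag(t, np.cos(t), t) - np.cos(t)))  # exact at nodes
```

At the nodes the interpolant reproduces the data exactly, and between them the error for a smooth function like cosine is tiny.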

Here is an example in MATLAB.

t = (0:.01:pi)';
f = cos(t);
data = [t,f];
f1 = barylag(data,t);
err = norm(f-f1)

err =

     0

barylag is taken from the MATLAB website. As I can't really comment on your discrepancies, you should understand how lsqr is actually implemented: LSQR-style algorithms are Krylov methods, which is covered in Trefethen, as is the SVD. I have an example on my Quora page about numerical stability with the QR factorization, which is how these algorithms are actually constructed.
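On that last point, the QR route to least squares is easy to see in NumPy (a random full-rank system for illustration; rank-deficient problems still need the SVD route discussed above):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))   # full column rank w.p. 1
b = rng.standard_normal(8)

# Reduced (Householder-based) QR: A = Q R with Q^T Q = I.
Q, R = np.linalg.qr(A)
# The least-squares solution solves the triangular system R x = Q^T b.
x_qr = np.linalg.solve(R, Q.T @ b)

# Cross-check against the library least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For well-conditioned full-rank problems QR is cheaper than the SVD and just as stable.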