How to isolate a single element from a scraped web page in R
These questions are very helpful when dealing with web scraping and XML in R:
- Scraping html tables into R data frames using the XML package
- How to transform XML data into a data.frame?
Regarding your particular example: while I'm not sure what you want the output to look like, this gets the "goals scored" as a character vector:
```r
library(XML)

theURL <- "http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html"
fifa.doc <- htmlParse(theURL)
fifa <- xpathSApply(fifa.doc, "//*/div[@class='cont']", xmlValue)
goals.scored <- grep("Goals scored", fifa, value = TRUE)
```
The `xpathSApply` function gets all the values matching the given XPath expression and returns them as a vector. Note that I'm looking for a `div` with `class='cont'`. Using class values is frequently a good way to navigate an HTML document, because they tend to be reliable markers.
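As a minimal, self-contained illustration of the same pattern (using a made-up HTML snippet rather than the FIFA page, so it runs without a network connection):

```r
library(XML)

# A made-up HTML fragment standing in for the scraped page
html <- "<html><body>
  <div class='cont'>Goals scored: A 5', B 10'</div>
  <div class='other'>ignored</div>
</body></html>"

doc <- htmlParse(html, asText = TRUE)

# xpathSApply applies xmlValue to every node matching the XPath,
# so only the div with class='cont' is returned
vals <- xpathSApply(doc, "//*/div[@class='cont']", xmlValue)
vals
```

The class-based predicate `[@class='cont']` is what filters out the second `div`.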
You can clean this up however you want:
```r
> gsub("Goals scored", "", strsplit(goals.scored, ", ")[[1]])
[1] "Philipp LAHM (GER) 6'"    "Paulo WANCHOPE (CRC) 12'" "Miroslav KLOSE (GER) 17'"
[4] "Miroslav KLOSE (GER) 61'" "Paulo WANCHOPE (CRC) 73'"
[6] "Torsten FRINGS (GER) 87'"
```
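If you'd rather have a data frame than a character vector, here is one sketch, assuming every entry follows the `Name (CCC) MM'` pattern shown above (the sample `goals` vector and column names are mine, not from the original page):

```r
# Hypothetical cleaned-up entries of the form "Name (CCC) MM'"
goals <- c("Philipp LAHM (GER) 6'",
           "Paulo WANCHOPE (CRC) 12'",
           "Torsten FRINGS (GER) 87'")

# Capture player name, three-letter country code, and minute
m <- regmatches(goals, regexec("^(.*) \\(([A-Z]{3})\\) (\\d+)'$", goals))

# x[1] is the full match; x[2], x[3], x[4] are the captured groups
goals.df <- do.call(rbind, lapply(m, function(x)
  data.frame(player = x[2], country = x[3], minute = as.integer(x[4]),
             stringsAsFactors = FALSE)))
goals.df
```

`regexec` plus `regmatches` is base R, so this needs nothing beyond what you already have loaded.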