Reading input files by line using read command in shell scripting skips last line
`read` reads until it finds a newline character or the end of file, and returns a non-zero exit status if it encounters end-of-file. So it's quite possible for it to both read a line and return a non-zero exit status.
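You can see this behavior directly. Here `printf` produces input with no trailing newline; `read` still captures the text into the variable even though its exit status is non-zero:

```sh
# Input with no trailing newline: read fills the variable but still
# reports failure (non-zero status) because it hit end-of-file.
printf 'last line, no newline' | {
    if read -r line; then
        echo "read succeeded: $line"
    else
        echo "read failed but captured: $line"
    fi
}
# → read failed but captured: last line, no newline
```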
Consequently, the following code is not safe if the input might not be terminated by a newline:

```sh
while read LINE; do
    # do something with LINE
done
```

because the body of the `while` loop won't be executed on the last line.
Technically speaking, a file not terminated with a newline is not a text file, and text tools may fail in odd ways on such a file. However, I'm always reluctant to fall back on that explanation.
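If you want to check whether a given file is newline-terminated before processing it, one quick sketch (the file path here is just an illustration) relies on the fact that command substitution strips trailing newlines:

```sh
# Sample file without a trailing newline (illustrative path).
printf 'line1\nline2' > /tmp/no_newline.txt

# tail -c1 prints the last byte; command substitution strips a trailing
# newline, so the result is empty only for a newline-terminated file.
if [ -n "$(tail -c1 /tmp/no_newline.txt)" ]; then
    echo "missing trailing newline"
else
    echo "newline-terminated"
fi
# → missing trailing newline
```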
One way to solve the problem is to test whether what was read is non-empty (`-n`):

```sh
while read -r LINE || [[ -n $LINE ]]; do
    # do something with LINE
done
```
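The `|| [[ -n $LINE ]]` guard runs the loop body one final time when `read` fails but has still captured an unterminated last line. A quick demonstration:

```sh
# All three lines are processed, including the unterminated last one.
printf 'one\ntwo\nthree (no newline)' | while read -r LINE || [[ -n $LINE ]]; do
    echo "got: $LINE"
done
# → got: one
# → got: two
# → got: three (no newline)
```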
Other solutions include using `mapfile` to read the file into an array, piping the file through some utility which is guaranteed to terminate the last line properly (`grep .`, for example, if you don't want to deal with blank lines), or doing the iterative processing with a tool like `awk` (which is usually my preference).
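As a minimal sketch of those two alternatives: both `awk` and `mapfile` (bash 4+) treat an unterminated final line as a complete record, so nothing is dropped:

```sh
# awk sees the unterminated last chunk as a full record.
printf 'one\ntwo (no newline)' | awk '{ print NR ": " $0 }'
# → 1: one
# → 2: two (no newline)

# mapfile likewise stores the last, unterminated line in the array.
printf 'one\ntwo (no newline)' | {
    mapfile -t lines
    echo "read ${#lines[@]} lines"
}
# → read 2 lines
```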
Note that `-r` is almost certainly needed in the `read` builtin; it causes `read` not to reinterpret `\`-sequences in the input.
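A quick illustration of what `-r` changes: without it, `read` consumes backslashes as escape characters; with it, the input is taken literally:

```sh
# The input contains a literal backslash: a\tb
printf 'a\\tb\n' | { read line;    echo "without -r: $line"; }
# → without -r: atb
printf 'a\\tb\n' | { read -r line; echo "with -r: $line"; }
# → with -r: a\tb
```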
```sh
DONE=false
until $DONE; do
    read -r line || DONE=true
    echo "$line"
done < blah.txt
```
Use a `while` loop like this:

```sh
while IFS= read -r line || [ -n "$line" ]; do
    echo "$line"
done < file
```
Or use `grep` with a `while` loop:

```sh
while IFS= read -r line; do
    echo "$line"
done < <(grep "" file)
```
Using `grep .` instead of `grep ""` will skip the empty lines.
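This works because `grep` newline-terminates every line it prints, so it effectively repairs the missing final newline before `read` ever sees the input. A minimal sketch:

```sh
# grep "" matches every line and always terminates its output lines,
# so the plain read loop sees the unterminated last line too.
printf 'one\ntwo (no newline)' | grep "" | while IFS= read -r line; do
    echo "got: $line"
done
# → got: one
# → got: two (no newline)
```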
Note:
- Using `IFS=` keeps any line indentation intact.
- A file without a newline at the end isn't a standard Unix text file.