fast shell find

You can use the audit subsystem to monitor the creation and deletion of files. Combining this with an initial run of find should allow you to create a database of files that you can update in real time.
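As a minimal sketch of that approach, assuming a Linux system with auditd installed, something like the following could work; the directory, suffix, database file, and the filedb key are all placeholders:

# Initial pass: build the file database once with find.
find /myDirWithThousandsOfDirectories -name "*.suffix" > /var/tmp/files.db

# Ask the audit subsystem to log writes and attribute changes
# (creations, deletions, renames) under that tree.
auditctl -w /myDirWithThousandsOfDirectories -p wa -k filedb

# Later, list the audit events recorded under that key and use them
# to update files.db instead of re-running find over everything.
ausearch -k filedb -i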


Divide and conquer? Assuming a multi-processor OS and CPU, spawn a separate find command for each subfolder:

for dir in /myDirWithThousandsOfDirectories/*
do
    find "$dir" -name "*.suffix" &
done

Depending on the number of subdirectories, you may want to control how many processes (find commands) run at a given time. That will be a bit trickier, but doable (i.e., using a bash shell, keep an array of the PIDs of the spawned processes, collected from $!, and only allow new ones depending on the length of the array); a rough sketch follows below. Also, the above doesn't search for files directly under the root directory; it's just a quick example of the idea.
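Here is a minimal sketch of that throttling, assuming bash 4.3+ (for wait -n); to keep it short it counts running jobs with jobs -rp instead of maintaining a PID array, and MAX_JOBS, the directory, and the suffix are placeholders:

#!/usr/bin/env bash
MAX_JOBS=4   # how many find commands may run at once (assumption)

for dir in /myDirWithThousandsOfDirectories/*
do
    # If we already have MAX_JOBS background jobs, wait for one to finish.
    while (( $(jobs -rp | wc -l) >= MAX_JOBS ))
    do
        wait -n    # bash 4.3+: returns as soon as any one job exits
    done
    find "$dir" -name "*.suffix" &
done

wait   # let the remaining find commands finish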

If you don't know how process management is done, it's time to learn ;) This is a really good text on the subject; this is the part you actually need, but read the whole thing to understand how it works.


Since you're using a simple glob, you might be able to use Bash's recursive globbing. Example:

shopt -s globstar
for path in /etc/**/**.conf
do
    echo "$path"
done

It might be faster, since it uses an internal shell capability with much less flexibility than find.

If you can't use Bash but you have a limit on the path depth, you can explicitly list the different depths:

for path in /etc/*/*.conf /etc/*/*/*.conf /etc/*/*/*/*.conf
do
    echo "$path"
done