Sorry to reply to my own message, but a quick update...

I went to another xterm and tried to get a second look at things. From
there I was able to see and remove the rest of the files (no idea what
happened to that one xterm to muck with ls/rm). Anyway, things are
cleaned up, and that even seemed to fix things in the original xterm
that was causing the problems... I may well still reboot and fsck...

Any idea what could have screwed up? I remember that there used to be
a limit of 10,000 files per directory or inode. That is why I was
originally concerned with having more than 2.5 times that in a single
directory.

  EBo --

"John (EBo) David" wrote:
>
> ummm....
>
> I have a unit and regression test suite for my ecological modeling
> virtual machine. I needed to bump up one of the tests to run for a
> longer time for model testing. Problem was that I forgot that I am
> creating an image dump for *every* variable specified each and every
> iteration... start_time=0, stop_time=25, dt=0.01... that is 2,500
> images for, umm... looks like 8 variables, and there are 15 other
> unit tests...
>
> So now I find that I have over 25,000 files in a single directory.
> Oops. OK, off to clean them up....
>
> First, ls and rm complain that there are too many files to
> "rm *.pgm", so I go through and delete them by group name... OK,
> that appears to go fine. Now I am finally able to "rm *.pgm", so
> they should be cleaned up. The problem is that once I do that, "ls"
> still reports hundreds of pgm files in the directory, but an
> "ls *meta_pop*" does not. I am afraid that I have corrupted the
> file system or something.
>
> Any suggestions?
>
> Thoughts:
>
> Shut down the machine, reboot single-user, fsck every partition
> (including XFS partitions), and recite some prayer to Boolean...
>
> Other ideas, thoughts, or intuitions as to what happens when you
> create tens of thousands of files in a single directory by accident?
>
> EBo --
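
P.S. For anyone checking the math: (stop_time - start_time) / dt =
(25 - 0) / 0.01 = 2,500 iterations, so 2,500 image dumps per variable.
Times 8 variables, that is 20,000 files from that one test alone, and
the other unit tests account for the rest of the 25,000+.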
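
P.P.S. The "rm *.pgm" failure is the shell hitting its argument-list
limit when it expands the glob, not a problem in rm itself. The usual
workaround is to let find do the matching, so the file names never
have to fit on one command line. A rough sketch (adjust the directory
and pattern for your own tree):

  # delete the matching files without expanding a giant glob
  find . -maxdepth 1 -name '*.pgm' -delete

  # or, if your find has no -delete, batch the names through xargs
  find . -maxdepth 1 -name '*.pgm' -print0 | xargs -0 rm -f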
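
P.P.P.S. On the reboot-and-fsck plan: fsck covers the ext partitions,
but it is a no-op on XFS (fsck.xfs does nothing), so XFS partitions
want xfs_repair instead, and both tools want the filesystem
unmounted. Something like this from single user (the device names
here are made up; substitute your own):

  umount /dev/sda3         # hypothetical ext2 partition
  fsck -f /dev/sda3        # force a full check even if marked clean

  umount /dev/sda5         # hypothetical XFS partition
  xfs_repair /dev/sda5     # the actual check/repair tool for XFS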