bash - How do I disable stdout redirection-to-file buffering in Perl?


Here's a script that launches 10 processes, each writing 100,000 lines to stdout, which is inherited from the parent:

#!/usr/bin/env perl
# buffering.pl
use 5.10.0;
use strict;
use warnings FATAL => "all";
use autodie;

use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new(4);

$| = 1; # don't think this does anything: syswrite bypasses PerlIO buffering anyway...

# start 10 jobs that write 100,000 lines each
for (1 .. 10) {
    $pm->start and next;

    for my $j (1 .. 100_000) {
        syswrite(\*STDOUT, "$j\n");
    }

    $pm->finish;
}
$pm->wait_all_children;

If I pipe to another process, all is well:

$ perl buffering.pl | wc -l
1000000

But if I redirect to a file on disk, the syswrites clobber each other:

$ perl buffering.pl > tmp.txt; wc -l tmp.txt
457584 tmp.txt

What's more, if I open write filehandles in the child processes and write directly to tmp.txt:

#!/usr/bin/env perl
# buffering2.pl
use 5.10.0;
use strict;
use warnings FATAL => "all";
use autodie;

use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new(4);

$| = 1;

for (1 .. 10) {
    $pm->start and next;

    # each child opens (and truncates) its own handle on tmp.txt
    open my $fh, '>', 'tmp.txt';

    for my $j (1 .. 100_000) {
        syswrite($fh, "$j\n");
    }
    close $fh;

    $pm->finish;
}
$pm->wait_all_children;

tmp.txt doesn't end up with the 1,000,000 lines I expected:

$ perl buffering2.pl; wc -l tmp.txt
100000 tmp.txt

So does redirection via '>' to disk involve some sort of buffering that redirection to a process doesn't? What's the deal?

When you redirect the whole Perl script, there is one file descriptor (created by the shell when it processed > tmp.txt and inherited as stdout by perl) which is dup'd into each child, so all ten children share a single file offset in the kernel. When you explicitly open the file in each child, you get ten separate file descriptors that are not dups of the original: each child truncates tmp.txt and writes the same bytes starting from its own offset zero, which is why only 100,000 lines survive. You should be able to replicate the shell-redirection case if you hoist open my $fh, '>', 'tmp.txt' out of the loop, before the fork (see the sketch below).
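A minimal sketch of that suggestion (my code, not the answerer's): perform the open once in the parent, before any fork, so each child inherits a dup of the same descriptor and shares its offset just as it does under > tmp.txt:

#!/usr/bin/env perl
# buffering3.pl -- hypothetical name for the hoisted-open variant
use 5.10.0;
use strict;
use warnings FATAL => "all";
use autodie;

use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new(4);

# opened before forking: every child gets a dup of this descriptor,
# so they all share one kernel file offset, like shell redirection
open my $fh, '>', 'tmp.txt';

for (1 .. 10) {
    $pm->start and next;

    for my $j (1 .. 100_000) {
        syswrite($fh, "$j\n");
    }

    $pm->finish;
}
$pm->wait_all_children;
close $fh;

Because this recreates the shared-offset setup, it should behave the same way the > tmp.txt run did on your system, clobbering included; it demonstrates the mechanism rather than recovering the lost lines.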

The pipe case works because you're talking to a pipe, not a file: a pipe has no notion of a file offset that can be inadvertently shared in the kernel as described above.
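The answer stops at the diagnosis, but a conventional fix (not from the original answer) is append mode: '>>' opens the file with O_APPEND, which makes the kernel reposition to end-of-file atomically on every write, so each child's syswrite lands after everyone else's bytes instead of on top of them. A sketch, assuming a local filesystem (O_APPEND appends are not atomic over NFS):

#!/usr/bin/env perl
# append.pl -- hypothetical name for the O_APPEND variant
use 5.10.0;
use strict;
use warnings FATAL => "all";
use autodie;

use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new(4);

# truncate once in the parent so leftovers from earlier runs vanish
open my $trunc, '>', 'tmp.txt';
close $trunc;

for (1 .. 10) {
    $pm->start and next;

    # '>>' sets O_APPEND: each syswrite is atomically positioned at
    # the current end of file, so children cannot overwrite each other
    open my $fh, '>>', 'tmp.txt';

    for my $j (1 .. 100_000) {
        syswrite($fh, "$j\n");
    }
    close $fh;

    $pm->finish;
}
$pm->wait_all_children;

With this version, perl append.pl; wc -l tmp.txt should count the full 1,000,000 lines, since each short line goes out in a single write(2) call.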

