Provided by: xscreensaver-data-extra_4.23-4ubuntu8_i386

NAME

       webcollage - decorate the screen with random images from the web

SYNOPSIS

       webcollage   [-display  host:display.screen]  [-root]  [-window-id  id]
       [-verbose] [-timeout secs] [-delay secs] [-background bg]  [-no-output]
       [-urls-only]  [-imagemap  filename-base]  [-size  WxH] [-opacity ratio]
       [-filter  command]   [-filter2   command]   [-http-proxy   host[:port]]
       [-dictionary dictionary-file] [-driftnet [cmd]]

DESCRIPTION

       The webcollage program pulls random images off of the World Wide Web and
       scatters them on the root window.  One satisfied customer described  it
       as "a nonstop pop culture brainbath."  This program finds its images by
       doing random web searches, and  extracting  images  from  the  returned
       pages.

       webcollage is written in perl(1) and requires Perl 5.

       It  will  be  an  order  of  magnitude  faster  if  you  also  have the
       webcollage-helper program installed (a GDK/JPEG image compositor),  but
       webcollage works without it as well.

       webcollage can be used in conjunction with the driftnet(1) program (the
       Unix equivalent of EtherPEG) to snoop images from traffic on your local
       subnet, instead of getting images from search engines.
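
       For example, assuming one of the image-placement helpers listed under
       the -root option below is installed, a simple collage on the root
       window can be started with:

            webcollage -root -v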

OPTIONS

       webcollage accepts the following options:

        -root   Draw on the root window.  This option is mandatory if output
                is being produced: drawing to a window other than the root
                window is not yet supported.

               Images  are  placed  on  the  root  window  by using one of the
               xscreensaver-getimage(1),   chbg(1),    xv(1),    xli(1),    or
               xloadimage(1) programs (whichever is available.)

       -window-id id
               Draw  to  the  indicated window instead; this only works if the
               xscreensaver-getimage(1) program is installed.

       -verbose or -v
               Print diagnostics to stderr.  Multiple -v switches increase the
               amount  of  output.   -v will print out the URLs of the images,
               and where they were placed; -vv will print  out  any  warnings,
               and  all URLs being loaded; -vvv will print information on what
               URLs were rejected; and so on.

       -timeout seconds
               How long to wait for a URL to complete before giving up  on  it
               and moving on to the next one.  Default 30 seconds.

       -delay seconds
               How   long   to  sleep  between  images.   Default  2  seconds.
               (Remember that this program  probably  spends  a  lot  of  time
               waiting for the network.)
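
                For example, to give up on slow URLs after 10 seconds and to
                pause 5 seconds between images (both values are arbitrary):

                     webcollage -root -timeout 10 -delay 5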

       -background color-or-ppm
               What  to  use  for the background onto which images are pasted.
               This may be a color name, a hexadecimal  RGB  specification  in
                the form '#rrggbb', or the name of a PPM file.
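
                For example, to paste the images onto a plain black
                background:

                     webcollage -root -background black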

       -size WxH
               Normally,  the  output image will be made to be the size of the
               screen (or target window.)  This lets you specify  the  desired
               size.

       -opacity ratio
               How  transparently  to  paste  the  images  together,  with 0.0
               meaning "completely  transparent"  and  1.0  meaning  "opaque."
               Default   0.85.    A  value  of  around  0.3  will  produce  an
               interestingly blurry image after a while.
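
                For example, to build a 1280x1024 collage with the blurrier
                blending described above (the size is only illustrative):

                     webcollage -root -size 1280x1024 -opacity 0.3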

       -no-output
               If this option is specified, then  no  composite  output  image
               will   be   generated.   This  is  only  useful  when  used  in
               conjunction with -verbose.
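
                For example, to watch which images would have been chosen
                without drawing anything at all:

                     webcollage -no-output -vv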

       -urls-only
               If this option is specified, then  no  composite  output  image
               will  be  generated:  instead,  a  list  of  image URLs will be
               printed on stdout.
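
                For example, to collect the URLs in a file (the file name is
                arbitrary):

                     webcollage -urls-only > image-urls.txt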

       -imagemap filename-base
               If this option is specified, then instead of writing  an  image
               to  the root window, two files will be created: "base.html" and
               "base.jpg".  The JPEG will be the collage; the HTML  file  will
                include that image, and an image map linking the sub-images
                to the pages on which they were found (just like
               http://www.jwz.org/webcollage/.)
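
                For example (the base name is arbitrary), the following
                should leave "collage.html" and "collage.jpg" in the current
                directory:

                     webcollage -size 1024x768 -imagemap collage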

       -filter command
               Filter  all  source  images  through this command.  The command
               must take a PPM file on stdin, and write  a  new  PPM  file  to
               stdout.  One good choice for a filter would be:

                     webcollage -root -filter 'vidwhacker -stdin -stdout'

       -filter2 command
               Filter  the  composite image through this command.  The -filter
               option applies to the sub-images; the -filter2 applies  to  the
               final, full-screen image.
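
                For example, assuming the netpbm pnmflip(1) tool is
                installed, the finished collage could be flipped upside down
                with:

                     webcollage -root -filter2 'pnmflip -topbottom'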

       -http-proxy host:port
               If  you  must go through a proxy to connect to the web, you can
               specify it  with  this  option,  or  with  the  $http_proxy  or
               $HTTP_PROXY environment variables.
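
                For example (the proxy host and port shown are hypothetical):

                     webcollage -root -http-proxy proxy.example.com:8080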

       -dictionary file
               Webcollage  normally  looks at the system’s default spell-check
               dictionary to generate words to feed into the  search  engines.
               You can specify an alternate dictionary with this option.

               Note  that  by  default,  webcollage  searches for images using
               several different methods, not all of which involve  dictionary
               words,  so  using  a  "topical"  dictionary  file  will not, in
               itself, be as effective as you might be hoping.
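
                For example (the dictionary path shown is hypothetical):

                     webcollage -root -dictionary /usr/share/dict/french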

       -driftnet [ args ]
               driftnet(1) is a program that snoops your  local  ethernet  for
               packets  that  look  like they might be image files.  It can be
               used in conjunction with webcollage to generate  a  collage  of
               what  other people on your network are looking at, instead of a
               search-engine collage.  If you have driftnet installed on  your
               $PATH, just use the -driftnet option.  You can also specify the
               location of the program like this:

                    -driftnet /path/to/driftnet

               or, you can provide extra arguments like this:

                     -driftnet '/path/to/driftnet -extra -args'

               Driftnet version 0.1.5 or later is  required.   Note  that  the
               driftnet  program  requires root access, so you’ll have to make
               driftnet be setuid-root for  this  to  work.   Please  exercise
               caution.
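
                Putting it together, a collage of images sniffed from the
                local network might be started with:

                     webcollage -root -v -driftnet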

ENVIRONMENT

       DISPLAY to get the default host and display number.

       XENVIRONMENT
               to  get  the  name of a resource file that overrides the global
               resources stored in the RESOURCE_MANAGER property.

       http_proxy or HTTP_PROXY
               to get the default HTTP proxy host and port.
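
                For example, the -http-proxy option described above could
                instead be expressed as (host and port hypothetical):

                     http_proxy=proxy.example.com:8080 webcollage -root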

FILES AND URLS

       /usr/dict/words, /usr/share/lib/dict/words, or /usr/share/dict/words to
       find the random words to feed to certain search engines.

           http://www.altavista.com/image/randomlink,
           http://random.yahoo.com/fast/ryl,
           http://www.livejournal.com/stats/latest-img.bml, and
           http://www.google.com/ to find random web pages.

BOOBIES

       The Internet being what it is, absolutely anything might show up in the
       collage including -- quite possibly -- pornography, or even nudity.

BUGS

       Animated GIFs are not supported: only the first frame will be used.

UPGRADES

       The  latest  version  of  webcollage  can  be  found  as  a   part   of
       xscreensaver, at http://www.jwz.org/xscreensaver/, or on the WebCollage
       page at http://www.jwz.org/webcollage/.

       DriftNet: http://www.ex-parrot.com/~chris/driftnet/

SEE ALSO

       X(1),  xscreensaver(1),  xli(1),  xv(1),   xloadimage(1),   ppmmake(1),
       giftopnm(1), pnmpaste(1), pnmscale(1), djpeg(1), cjpeg(1), xdpyinfo(1),
       perl(1), vidwhacker(1), dadadodo(1), driftnet(1), EtherPEG, EtherPeek

COPYRIGHT

       Copyright © 1998-2005 by Jamie  Zawinski.   Permission  to  use,  copy,
       modify,  distribute,  and  sell this software and its documentation for
       any purpose is hereby granted without  fee,  provided  that  the  above
       copyright  notice  appear  in  all  copies and that both that copyright
       notice and this permission notice appear in  supporting  documentation.
       No  representations are made about the suitability of this software for
       any purpose.  It  is  provided  "as  is"  without  express  or  implied
       warranty.

AUTHOR

       Jamie Zawinski <jwz@jwz.org>, 24-May-1998.