NAME
Biber - main module for biber, a bibtex replacement for users of biblatex
SYNOPSIS
    use Biber;
    my $biber = Biber->new();
    $biber->parse_ctrlfile("example.bcf");
    $biber->prepare;
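    A slightly fuller version of the same flow, sketched under the assumption
    that Biber::Output::bbl is the output class used for .bbl files and that
    the output target is configured elsewhere:

        use Biber;
        use Biber::Output::bbl;    # assumed standard .bbl output backend

        my $biber = Biber->new();
        $biber->parse_ctrlfile('example.bcf');

        # The output object must be a subclass of Biber::Output::base
        $biber->set_output_obj(Biber::Output::bbl->new());

        # Process and sort all entries, then write the result
        $biber->prepare;
        $biber->get_output_obj->output;   # output target setup elided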
METHODS
new
    Initialize the Biber object, optionally passing named options as arguments.

display_problems
    Output a summary of warnings/errors before exit.

biber_tempdir
        my $tempdir = $biber->biber_tempdir

    Returns a File::Temp directory object for use in various things.

sections
        my $sections = $biber->sections

    Returns a Biber::Sections object describing the bibliography sections.

add_sections
    Adds a Biber::Sections object. Used externally from, e.g., biber.

sortlists
        my $sortlists = $biber->sortlists

    Returns a Biber::SortLists object describing the bibliography sorting lists.

set_output_obj
    Sets the object used to output final results. Must be a subclass of
    Biber::Output::base.

get_preamble
    Returns the current preamble as an array ref.

get_output_obj
    Returns the object used to output final results.

set_current_section
    Sets the current section number that we are working on to a section number.

get_current_section
    Gets the current section number that we are working on.

tool_mode_setup
    Fakes parts of the control file for tool mode.

parse_ctrlfile
    This method reads the control file generated by biblatex to work out the
    various biblatex options. See Constants.pm for defaults and an example of
    the data structure being built here.

process_setup
    Place to put misc pre-processing things needed later.

process_setup_tool
    Place to put misc pre-processing things needed later for tool mode.

resolve_alias_refs
    Resolve aliases in xref/crossref/xdata which take keys as values to their
    real keys. We use set_datafield as we are overriding the alias in the
    datasource.

process_citekey_aliases
    Remove citekey aliases from citekeys as they don't point to real entries.

nullable_check
    Check entries for nullable fields.

instantiate_dynamic
    This instantiates any dynamic entries so that they are available for
    processing later on. This has to be done before almost all other
    processing so that when we call $section->bibentry($key), as we do many
    times in the code, we don't die because there is a key but no Entry
    object.
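    As a rough illustration of how the section accessors and bibentry() fit
    together (only sections() and bibentry() are documented above;
    get_sections(), number() and get_citekeys() are assumed accessor names
    based on typical usage, and $biber is set up as in the SYNOPSIS):

        # Hedged sketch: walk all sections and look up each entry object
        my $sections = $biber->sections;
        foreach my $section (@{$sections->get_sections}) {
            my $secnum = $section->number;
            foreach my $key ($section->get_citekeys) {
                my $entry = $section->bibentry($key);   # Biber::Entry object
                # ... inspect or post-process $entry here ...
            }
        }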
resolve_xdata
    Resolve xdata entries.

cite_setmembers
    Promotes set members to cited status.

process_interentry
        $biber->process_interentry

    This does several things:

      1. Records the set information for use later.
      2. Ensures proper inheritance of data from cross-references.
      3. Ensures that crossrefs/xrefs that are directly cited or
         cross-referenced at least mincrossrefs times are included in the
         bibliography.

validate_datamodel
    Validate bib data according to a datamodel. Note that we are validating
    the internal Biber::Entries after they have been created from the
    datasources, so this is datasource neutral, as it should be. It is here to
    enforce adherence to what biblatex expects.

process_entries_pre
    Main processing operations, to generate metadata and entry information.
    This method is automatically called by prepare(). Here we generate the
    "namehash" and the strings for "labelname", "labelyear", "labelalpha",
    "sortstrings", etc. Runs prior to uniqueness processing.

process_entries_post
    More processing operations, to generate things which require uniqueness
    information like namehash. Runs after uniqueness processing.

process_singletitle
    Track seen work combinations for generation of singletitle.

process_extrayear
    Track labelname/year combinations for generation of extrayear.

process_extratitle
    Track labelname/labeltitle combinations for generation of extratitle.

process_extratitleyear
    Track labeltitle/labelyear combinations for generation of extratitleyear.

process_sets
    Postprocess set entries. Checks for common set errors and enforces
    'dataonly' for set members.

process_labelname
    Generate labelname information.

process_labeldate
    Generate labeldate information.

process_labeltitle
    Generate labeltitle. Note that this is not conditionalised on the biblatex
    "labeltitle" option, as labeltitle should always be output since all
    standard styles need it. Only extratitle is conditionalised on the
    biblatex "labeltitle" option.

process_fullhash
    Generate fullhash.

process_namehash
    Generate namehash.

process_pername_hashes
    Generate per_name_hashes.

process_visible_names
    Generate the visible name information. This is used in various places and
    it is useful to have it generated in one place.

process_labelalpha
    Generate the labelalpha and also the variant for sorting.

process_extraalpha
    Generate the extraalpha information.

process_presort
    Put presort fields for an entry into the main Biber bltx state so that it
    is all available in the same place, since this can be set per-type and
    globally too.

process_lists
    Sort and filter lists for a section.

check_list_filter
    Run an entry through a list filter. Returns a boolean.

generate_sortinfo
    Generate information for sorting.

uniqueness
    Generate the uniqueness information needed when creating the .bbl.

create_uniquename_info
    Gather the uniquename information as we look through the names.

    What is happening in here is the following: we are registering the number
    of occurrences of each name, name+init and fullname within a specific
    context. For example, the context is "global" with uniquename < 5 and
    "name list" for uniquename=5 or 6. The keys we store to count this are the
    most specific information for the context, so, for uniquename < 5, this is
    the full name and for uniquename=5 or 6, this is the complete list of full
    names. These keys have values in a hash which are ignored. They serve only
    to accumulate repeated occurrences within the context; we don't care about
    this and so the values are a useful sinkhole for such repetition.

    For example, if we find in the global context a lastname "Smith" in two
    different entries under the same form "Alan Smith", the data structure
    will look like:

        {Smith}->{global}->{Alan Smith} = 2

    We don't care about the value, as this means that there are 2 "Alan
    Smith"s in the global context which need disambiguating identically
    anyway. So, we just count the keys for the lastname "Smith" in the global
    context to see how ambiguous the lastname itself is. This would be "1" and
    so "Alan Smith" would get uniquename=0 because it's unambiguous as just
    "Smith".

    The same goes for "minimal" list context disambiguation for uniquename=5
    or 6. For example, if we had the lastname "Smith" to disambiguate in two
    entries with labelname "John Smith and Alan Jones", the data structure
    would look like:

        {Smith}->{Smith+Jones}->{John Smith+Alan Jones} = 2

    Again, counting the keys of the context for the lastname gives us "1",
    which means we have uniquename=0 for "John Smith" in both entries because
    it's the same list.

    This also works for repeated names in the same list "John Smith and Bert
    Smith". Disambiguating "Smith" in this:

        {Smith}->{Smith+Smith}->{John Smith+Bert Smith} = 2

    So both "John Smith" and "Bert Smith" in this entry get uniquename=0 (of
    course, as long as there are no other "X Smith and Y Smith" entries where
    X != "John" or Y != "Bert").
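    The counting scheme described above can be pictured with a small
    self-contained sketch. This is only an illustration of the data structure,
    not the actual internal code:

        use strict;
        use warnings;

        # Illustration only: register each occurrence of a lastname within a
        # context, keyed by the most specific form seen in that context.
        my %uniquenamecount;
        sub register_name {
            my ($lastname, $context, $fullname) = @_;
            $uniquenamecount{$lastname}{$context}{$fullname}++;
        }

        # Two entries, both citing "Alan Smith" in the global context
        register_name('Smith', 'global', 'Alan Smith');
        register_name('Smith', 'global', 'Alan Smith');

        # Ambiguity of "Smith" in the global context is the number of distinct
        # keys, not their values: here it is 1, so "Smith" alone is unambiguous
        # and "Alan Smith" would get uniquename=0.
        my $ambiguity = scalar keys %{$uniquenamecount{Smith}{global}};
        print "Smith/global ambiguity: $ambiguity\n";   # prints 1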
generate_uniquename
    Generate the per-name uniquename values using the information harvested by
    create_uniquename_info().

create_uniquelist_info
    Gather the uniquelist information as we look through the names.

generate_uniquelist
    Generate the per-namelist uniquelist values using the information
    harvested by create_uniquelist_info().

generate_extra
    Generate information for:

      * extraalpha
      * extrayear
      * extratitle
      * extratitleyear

generate_singletitle
    Generate the singletitle field, if requested. The information for
    generating this is gathered in process_singletitle().

sort_list
    Sort a list using information in entries according to a certain sorting
    scheme. Use a flag to skip info messages on the first pass.

prepare
    Do the main work. Process and sort all entries before writing the output.

prepare_tool
    Do the main work for tool mode.

fetch_data
    Fetch citekey and dependents data from section datasources. Expects to
    find datasource packages named:

        Biber::Input::<type>::<datatype>

    and one defined subroutine called:

        Biber::Input::<type>::<datatype>::extract_entries

    which takes args:

      1. Biber object
      2. Datasource name
      3. Reference to an array of cite keys to look for

    and returns an array of the cite keys it did not find in the datasource.
    (A hedged driver skeleton illustrating this contract is sketched at the
    end of this section.)

get_dependents
    Get dependents of the entries for a given list of citekeys. Is called
    recursively until there are no more dependents to look for.

remove_undef_dependent
    Remove undefined dependent keys from an entry using a map of dependent
    keys to entries.

_parse_sort
    Convenience sub to parse a .bcf sorting section and return a nice sorting
    object.

_filedump and _stringdump
    Dump the biber object with Data::Dump for debugging.
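    The datasource convention described under fetch_data can be sketched as a
    minimal driver skeleton. Only the package naming scheme and the
    extract_entries() contract come from the description above; the package
    name itself and the helper routine are hypothetical, and the entry
    construction details are elided:

        package Biber::Input::file::mybibformat;   # hypothetical <type>::<datatype>
        use strict;
        use warnings;

        # Contract from fetch_data: take the Biber object, a datasource name
        # and a ref to an array of citekeys; return the keys not found.
        sub extract_entries {
            my ($biber, $source, $keys) = @_;
            my @notfound;

            foreach my $key (@$keys) {
                my $raw = _lookup_in_source($source, $key);   # hypothetical helper
                if (defined $raw) {
                    # ... build a Biber::Entry from $raw and add it to the
                    # current section here (details depend on the datatype) ...
                }
                else {
                    push @notfound, $key;
                }
            }
            return @notfound;
        }

        # Hypothetical stand-in for whatever actually reads the datasource
        sub _lookup_in_source { return undef }

        1;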
AUTHORS
Francois Charette, <firmicus at ankabut.net>
Philip Kime, <philip at kime.org.uk>
BUGS
Please report any bugs or feature requests on our sourceforge tracker at <https://sourceforge.net/tracker2/?func=browse&group_id=228270>.
COPYRIGHT & LICENSE
Copyright 2009-2013 Francois Charette and Philip Kime, all rights reserved. This module is free software. You can redistribute it and/or modify it under the terms of the Artistic License 2.0. This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.