Provided by: avr-libc_2.2.1-1_all 

NAME
FAQ - Frequently Asked Questions
FAQ Index
• Interrupts
• Why doesn't my program recognize a variable updated in an interrupt routine?
• Why do some 16-bit timer registers sometimes get trashed?
• What ISR names are available for my device?
• What pitfalls exist when writing reentrant code?
• Why are interrupts re-enabled in the middle of writing the stack pointer?
• Why are (many) interrupt flags cleared by writing a logical 1?
• C/C++
• Can I use C++ on the AVR?
• Which -O flag to use?
• Shouldn't I initialize all my variables?
• How do I pass an IO port as a parameter to a function?
• Why do all my 'foo...bar' strings eat up the SRAM?
• How do I put an array of strings completely in ROM?
• How to modify MCUCR or WDTCR early?
• How do I perform a software reset of the AVR?
• On a device with more than 128 KiB of flash, how to make function pointers work?
• What registers are used by the C compiler?
• How to permanently bind a variable to a register?
• Why is assigning ports in a 'chain' a bad idea?
• What is all this _BV() stuff about?
• Is it really impossible to program the ATtinyXX in C?
• (Inline) Assembly
• How do I use a #define'd constant in an asm statement?
• Which AVR-specific assembler operators are available?
• Linking and Binaries
• How do I relocate code to a fixed address?
• How to add a raw binary image to linker output?
• Why are there five different linker scripts?
• Static Analysis
• Which header files are included in my program?
• Which macros are defined in my program? Where are they defined, and to what value?
• How to detect RAM memory and variable overlap problems?
• Debugging
• Why does the PC randomly jump around when single-stepping through my program in avr-gdb?
• How do I trace an assembler file in avr-gdb?
• Hardware
• Why are some addresses of the EEPROM corrupted (usually address zero)?
• My UART is generating nonsense! My ATmega128 keeps crashing! Port F is completely broken!
• Why do 'programmed' fuses have the bit value 0?
• How to use external RAM?
• Other
• Why is my baud rate wrong?
• What is this 'clock skew detected' message?
Why doesn't my program recognize a variable updated in an interrupt routine?
When using the optimizer, in a loop like the following one:
#include <avr/interrupt.h>

uint8_t flag;
...
ISR(SOME_vect) {
    flag = 1;
}
...
while (flag == 0) {
    ...
}
the compiler will typically access flag only once, and optimize further accesses completely away, since
its code path analysis shows that nothing inside the loop could change the value of flag anyway. To tell
the compiler that this variable could be changed outside the scope of its code path analysis (e. g. from
within an interrupt routine), the variable needs to be declared like:
volatile uint8_t flag;
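A complete minimal sketch of the pattern (the vector name TIMER0_OVF_vect is only an example; pick a
vector that exists on your device and set up the peripheral accordingly):
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t flag;          /* shared with the interrupt routine */

ISR(TIMER0_OVF_vect)            /* example vector name */
{
    flag = 1;
}

int main (void)
{
    /* set up the timer and its overflow interrupt here ... */
    sei ();
    while (flag == 0)
    {
        /* idle until the interrupt routine sets the flag */
    }
    return 0;
}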
Back to FAQ Index.
How to permanently bind a variable to a register?
This can be done with
register uint8_t counter __asm("r3");
Typically, it should be safe to use r2 through r7 that way.
Registers r8 through r25 can be used for argument passing by the compiler in case many or long arguments
are being passed to callees. If this is not the case throughout the entire application, these registers
could be used for register variables as well.
Extreme care should be taken that the entire application is compiled with a consistent set of register-
allocated variables including possibly used library functions. This can be achieved by compiling each
module with -ffixed-r3 or -ffixed-3. Notice that when you are using library functions from libgcc (the
avr-gcc runtime library) or AVR-LibC, then these libraries were generated without the requirement to
avoid specific registers. Hence when you are using libraries from the distribution, you must make sure
that none of the reserved registers is used in the generated binary.
Also notice that global register variables can't be volatile, because only variables in memory can be
volatile, and register variables are not located in memory.
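A minimal sketch of the idea (the register r3 and the variable name are arbitrary; as discussed above,
every module of the application, including libraries, would have to respect the reservation, e.g. by
compiling everything with -ffixed-r3):
#include <stdint.h>

register uint8_t event_count __asm__("r3");   /* lives in r3, never in RAM */

void count_event (void)
{
    event_count++;
}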
Back to FAQ Index.
How to modify MCUCR or WDTCR early?
Basically, write a small function which looks like this:
#include <avr/io.h>

static __attribute__((used, unused, naked, section(".init3")))
void init_MCUCR (void);

void init_MCUCR (void)
{
    MCUCR = _BV(SRE) | _BV(SRW);
}
Do not call this function by hand! This piece of code will be inserted in startup code, which is run
right after reset. For the meaning of the attributes, see How do I perform a software reset of the AVR?
The advantage of this method is that you can insert any initialization code you want (just remember that
this is very early startup -- no stack and no __zero_reg__ yet), and no program memory space is wasted if
this feature is not used.
There should be no need to modify linker scripts anymore, except for some very special cases. It is best
to leave __stack at its default value (end of internal SRAM -- faster, and required on some devices like
ATmega161 because of errata), and add -Wl,-Tdata,0x801100 to start the data section above the stack.
For more information on using sections, see Memory Sections. There is also an example in the In C/C++ Code subsection.
Note that in C code, any such function would preferably be placed into section .init3 as the code in
.init2 ensures the internal register __zero_reg__ is already cleared.
Back to FAQ Index.
What is all this _BV() stuff about?
When performing low-level output work, which is a very central point in microcontroller programming, it
is quite common that a particular bit needs to be set or cleared in some IO register. While the device
documentation provides mnemonic names for the various bits in the IO registers, and the AVR device-
specific IO definitions reflect these names in definitions for numerical constants, a way is needed to
convert a bit number (usually within a byte register) into a byte value that can be assigned directly to
the register. However, sometimes the direct bit numbers are needed as well (e. g. in an SBI()
instruction), so the definitions cannot usefully be made as byte values in the first place.
So in order to access a particular bit number as a byte value, use the _BV() macro. Of course, the
implementation of this macro is just the usual bit shift (which is done by the compiler anyway, thus
doesn't impose any run-time penalty), so the following applies:
_BV(3) => 1 << 3 => 0x08
However, using the macro often makes the program more readable.
Example: clock timer 2 with full IO clock (CS2x = 0b001), toggle OC2 output on compare match (COM2x =
0b01), and clear timer on compare match (CTC2 = 1). Make OC2 (PD7) an output.
TCCR2 = _BV(COM20) | _BV(CTC2) | _BV(CS20);
DDRD = _BV(PD7);
Back to FAQ Index.
Can I use C++ on the AVR?
Basically yes, C++ is supported (assuming your compiler has been configured and compiled to support it,
of course). Source files ending in .cc, .cpp or .C will automatically cause the compiler frontend to
invoke the C++ compiler. Alternatively, the C++ compiler could be explicitly called by the name avr-c++.
However, there's currently no support for libstdc++, the standard support library needed for a complete
C++ implementation. This imposes a number of restrictions on the C++ programs that can be compiled. Among
them are:
• Obviously, none of the C++ related standard functions, classes, and template classes are available.
• The operators new and delete are not implemented, attempting to use them will cause the linker to
complain about undefined external references. (This could perhaps be fixed.)
• Some of the supplied include files are not C++ safe, i. e. they need to be wrapped into
extern "C" { ... }
(This could certainly be fixed, too.)
• Exceptions are not supported. Since exceptions are enabled by default in the C++ frontend, they
explicitly need to be turned off using -fno-exceptions in the compiler options. Failing this, the
linker will complain about an undefined external reference to __gxx_personality_sj0.
Constructors and destructors are supported though, including global ones.
When programming C++ in space- and runtime-sensitive environments like microcontrollers, extra care
should be taken to avoid unwanted side effects of the C++ calling conventions like implied copy
constructors that could be called upon function invocation etc. These things could easily add up into a
considerable amount of time and program memory wasted. Thus, casual inspection of the generated assembler
code (using the -S compiler option) seems to be warranted.
Back to FAQ Index.
Shouldn't I initialize all my variables?
Variables in static storage are guaranteed to be initialized by the C standard. This includes global and
static variables without explicit initializer, which are initialized to 0. avr-gcc does this by placing
the appropriate code into section .init4. With respect to the standard, this sentence is somewhat
simplified (because the standard allows for machines where the actual bit pattern used differs from all
bits being 0), but for the AVR target, in general, all integer-type variables are set to 0, all pointers
to a NULL pointer, and all floating-point variables to 0.0.
As long as these variables are not explicitly initialized, or their initializer is all zeros, they go
into the .bss output section. This section simply records the size of the variable, but otherwise doesn't
consume space, neither within the object file nor within flash memory. (Of course, being a variable, it
will consume space in the target's SRAM.)
In contrast, global and static variables that have a non-zero initializer go into the .data output
section of the file. This will cause them to consume space in the object file (in order to record the
initializing value), and in the flash ROM of the target device. The latter is needed to initialize the
variables in RAM from the initializers kept in ROM during the startup code, so that all variables will
have their expected initial values when main() is entered.
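For example (a minimal sketch; the variable names are arbitrary):
#include <stdint.h>

uint8_t counter;        /* zero-initialized: placed in .bss, no flash needed */
uint8_t mode = 3;       /* non-zero initializer: placed in .data, the value 3
                           is kept in flash and copied to RAM by startup code */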
Back to FAQ Index.
Why do some 16-bit timer registers sometimes get trashed?
Some of the timer-related 16-bit IO registers use a temporary register (called TEMP in the AVR datasheet)
to guarantee an atomic access to the register despite the fact that two separate 8-bit IO transfers are
required to actually move the data. Typically, this includes access to the current timer/counter value
register (TCNTn), the input capture register (ICRn), and write access to the output compare registers
(OCRnM). Refer to the actual datasheet for each device's set of registers that involves the TEMP
register.
When accessing one of the registers that use TEMP from the main application, and possibly any other one
from within an interrupt routine, care must be taken that no access from within an interrupt context
could clobber the TEMP register data of an in-progress transaction that has just started elsewhere.
To protect interrupt routines against other interrupt routines, it's usually best to use the ISR() macro
when declaring the interrupt function, and to ensure that interrupts are still disabled when accessing
those 16-bit timer registers.
Within the main program, access to those registers could be encapsulated in calls to the cli() and sei()
macros. If the status of the global interrupt flag before accessing one of those registers is uncertain,
something like the following example code can be used.
#include <avr/io.h>
#include <avr/interrupt.h>

uint16_t
read_timer1 (void)
{
    uint8_t sreg;
    uint16_t val;

    sreg = SREG;
    cli ();
    val = TCNT1;
    SREG = sreg;

    return val;
}
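For the write direction, the same protection can also be expressed with the ATOMIC_BLOCK macros from
<util/atomic.h>; a minimal sketch (TCNT1 is just an example of a TEMP-based register):
#include <avr/io.h>
#include <util/atomic.h>

void
write_timer1 (uint16_t val)
{
    ATOMIC_BLOCK (ATOMIC_RESTORESTATE)
    {
        TCNT1 = val;
    }
}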
Back to FAQ Index.
How do I use a #define'd constant in an asm statement?
So you tried this:
asm volatile ("sbi 0x18, 7");
Which works. When you do the same thing but replace the address of the port by its macro name, like this:
asm volatile ("sbi PORTB, 7");
you get a syntax error from the assembler: 'Error: constant value required'.
PORTB is a preprocessor macro defined in the processor-specific header file that avr/io.h includes. As
you may know, the preprocessor does not touch strings, so PORTB gets passed to the assembler literally
instead of 0x18.
One way to avoid this problem is:
asm volatile ("sbi %0, 7" :: "I" (_SFR_IO_ADDR(PORTB)));
Note
For C programs, rather use the standard C bit operators instead, so the above would be expressed as
PORTB |= (1 << 7). The optimizer will take care to transform this into a single SBI instruction,
assuming the operands allow for this.
There are situations, though, where the address of a special function register (SFR) is required in inline
assembly. When the register can be accessed by LDS and STS, one can use the RAM address of the SFR:
asm volatile ("sts %0, __zero_reg__" :: "n" (& PORTB));
When the I/O address of the register is required, one way is to use _SFR_IO_ADDR to get the I/O address
like in the example above. A different approach is to use the inline asm print modifier %i, which is supported since
avr-gcc v4.7:
asm volatile ("out %i0, __zero_reg__" :: "n" (& PORTB));
The %i0 operand prints the address of PORTB as an I/O address.
Back to FAQ Index.
Why does the PC randomly jump around when single-stepping through my program in avr-gdb?
When compiling a program with both optimization (-O) and debug information (-g), which is fortunately
possible with avr-gcc, the code watched in the debugger is optimized code. It is guaranteed that the code
runs with the exact same optimizations as it would run without the -g switch.
Since the compiler is free to reorder code execution as long as the semantics do not change, code is
often rearranged in order to make it possible to use a single branch instruction for conditional
operations. Branch instructions can only cover a short range for the target PC (-63 through +64 words
from the current PC). If a branch instruction cannot be used directly, the compiler needs to work around
it by combining a skip instruction together with a relative jump (rjmp) instruction, which will need one
additional word of ROM.
Another side effect of optimization is that variable usage is restricted to the area of code where it is
actually used. So if a variable was placed in a register at the beginning of some function, this same
register can be re-used later on if the compiler notices that the first variable is no longer used inside
that function, even though the variable is still in lexical scope. When trying to examine the variable in
avr-gdb, the displayed result will then look garbled.
So in order to avoid these side effects, optimization can be turned off while debugging, or at least
optimization level -Og can be used, which was introduced to provide a good debugging experience while
still applying a reasonable amount of optimization.
However, some of these optimizations might also have the side effect of uncovering bugs that would
otherwise not be obvious, so it must be noted that turning off optimization can easily change the bug
pattern. In most cases, you are better off leaving optimizations enabled while debugging.
Back to FAQ Index.
How do I trace an assembler file in avr-gdb?
When using the -g compiler option, avr-gcc only generates line number and other debug information for C
(and C++) files that pass the compiler. Functions that don't have line number information will be
completely skipped by a single step command in gdb. This includes functions linked from a standard
library, but by default also functions defined in an assembler source file, since the -g compiler switch
does not apply to the assembler.
So in order to debug an assembler input file (possibly one that has to be passed through the C
preprocessor), it's the assembler that needs to be told to include line-number information into the
output file. (Other debug information like data types and variable allocation cannot be generated, since
unlike a compiler, the assembler basically doesn't know about this.) This is done using the (GNU)
assembler option --gstabs.
Example:
$ avr-as -mmcu=atmega128 --gstabs -o foo.o foo.s
When the assembler is not called directly but through the C compiler frontend (either implicitly by
passing a source file ending in .S, or explicitly using -x assembler-with-cpp), the compiler frontend
needs to be told to pass the --gstabs option down to the assembler. This is done using -Wa,--gstabs.
Please take care to only pass this option when compiling an assembler input file. Otherwise, the
assembler code that results from the C compilation stage will also get line number information, which
confuses the debugger.
Note
You can also use -Wa,-gstabs since the compiler will add the extra '-' for you.
Example:
$ EXTRA_OPTS='-Wall -mmcu=atmega128 -x assembler-with-cpp'
$ avr-gcc -Wa,--gstabs ${EXTRA_OPTS} -c -o foo.o foo.S
Also note that the debugger might get confused when entering a piece of code that has a non-local label
before, since it then takes this label as the name of a new function that appears to have been entered.
Thus, the best practice to avoid this confusion is to only use non-local labels when declaring a new
function, and restrict anything else to local labels. Local labels consist just of a number only.
References to these labels consist of the number, followed by the letter b for a backward reference, or f
for a forward reference. These local labels may be re-used within the source file, references will pick
the closest label with the same number and given direction.
Example:
myfunc:
        push    r16
        push    r17
        push    r18
        push    YL
        push    YH
        ...
        clr     r16                     ; start loop
        ldi     YL, lo8(sometable)
        ldi     YH, hi8(sometable)
        rjmp    2f                      ; jump to loop test at end
1:      ld      r17, Y+                 ; loop continues here
        ...
        breq    3f                      ; return from myfunc prematurely
        ...
        inc     r16
2:      cmp     r16, r18
        brlo    1b                      ; jump back to top of loop
3:      pop     YH
        pop     YL
        pop     r18
        pop     r17
        pop     r16
        ret
Back to FAQ Index.
How do I pass an IO port as a parameter to a function?
Consider this example code:
#include <inttypes.h>
#include <avr/io.h>

void
set_bits_func_wrong (volatile uint8_t port, uint8_t mask)
{
    port |= mask;
}

void
set_bits_func_correct (volatile uint8_t *port, uint8_t mask)
{
    *port |= mask;
}

#define set_bits_macro(port, mask) ((port) |= (mask))

int main (void)
{
    set_bits_func_wrong (PORTB, 0xaa);
    set_bits_func_correct (&PORTB, 0x55);
    set_bits_macro (PORTB, 0xf0);

    return (0);
}
The first function will generate object code which is not even close to what is intended. The major
problem arises when the function is called. When the compiler sees this call, it will actually pass the
value of the PORTB register (using an IN instruction), instead of passing the address of PORTB (e.g.
memory mapped io addr of 0x38, io port 0x18 for the mega128). This is seen clearly when looking at the
disassembly of the call:
set_bits_func_wrong (PORTB, 0xaa);
10a: 6a ea ldi r22, 0xAA
10c: 88 b3 in r24, 0x18
10e: 0e 94 65 00 call 0xca
So, the function, once called, only sees the value of the port register and knows nothing about which
port it came from. At this point, whatever object code is generated for the function by the compiler is
irrelevant. The interested reader can examine the full disassembly to see that the function's body is
completely fubar.
The second function shows how to pass (by reference) the memory mapped address of the io port to the
function so that you can read and write to it in the function. Here's the object code generated for the
function call:
set_bits_func_correct (&PORTB, 0x55);
112: 65 e5 ldi r22, 0x55
114: 88 e3 ldi r24, 0x38
116: 90 e0 ldi r25, 0x00
118: 0e 94 7c 00 call 0xf8
You can clearly see that 0x0038 is correctly passed for the address of the io port. Looking at the
disassembled object code for the body of the function, we can see that the function is indeed performing
the operation we intended:
void
set_bits_func_correct (volatile uint8_t *port, uint8_t mask)
{
f8: fc 01 movw r30, r24
*port |= mask;
fa: 80 81 ld r24, Z
fc: 86 2b or r24, r22
fe: 80 83 st Z, r24
}
100: 08 95 ret
Notice that we are accessing the io port via the LD and ST instructions.
The port parameter must be volatile to avoid a compiler warning.
Note
Because of the nature of the IN and OUT assembly instructions, they can not be used inside the
function when passing the port in this way. Readers interested in the details should consult the
Instruction Set datasheet.
Finally we come to the macro version of the operation. In this contrived example, the macro is the most
efficient method with respect to both execution speed and code size:
set_bits_macro (PORTB, 0xf0);
11c: 88 b3 in r24, 0x18
11e: 80 6f ori r24, 0xF0
120: 88 bb out 0x18, r24
Of course, in a real application, you might be doing a lot more in the function that takes the io port
address by reference, and in that case using a function instead of a macro could save you some code
space, though still at a cost in execution speed.
Care should be taken when such an indirect port access is going to one of the 16-bit IO registers where
the order of write access is critical (like some timer registers). All versions of avr-gcc up to 3.3 will
generate instructions that use the wrong access order in this situation (since with normal memory
operands where the order doesn't matter, this sometimes yields shorter code).
See http://mail.gnu.org/archive/html/avr-libc-dev/2003-01/msg00044.html for a possible workaround.
avr-gcc versions after 3.3 have been fixed in a way where this optimization will be disabled if the
respective pointer variable is declared to be volatile, so the correct behaviour for 16-bit IO ports can
be forced that way.
Back to FAQ Index.
What registers are used by the C compiler?
See also the Type Layout, Register Layout and Calling Convention sections in the avr-gcc Wiki.
Data types
char is 8 bits, int and short are 16 bits, long is 32 bits, long long is 64 bits, float is 32 bits,
double and long double are 32 bits or 64 bits, pointers are 16 bits (function pointers are word
addresses to allow addressing up to 128K program memory space).
• There is a -mint8 option (see Options for the C compiler avr-gcc) to make int and short 8 bits, long 16
bits and long long 32 bits. But that is not supported by AVR-LibC (except for stdint.h and
avr/pgmspace.h, but no 64-bit integer types are available) and violates C standards (int must be at
least 16 bits).
Call-used registers (r18-r27, r30-r31)
May be allocated by gcc for local data. You may use them freely in assembly subroutines. Calling C
subroutines can clobber any of them - the caller is responsible for saving and restoring.
For the AVR_TINY architecture (ATtiny10 and relatives), r20-r27 and r30-r31 are call-clobbered.
Call-saved registers (r2-r17, r28-r29)
May be allocated by gcc for local data. Calling C subroutines leaves them unchanged. Assembly
subroutines are responsible for saving and restoring these registers, if changed. r29:r28 (Y pointer)
is used as a frame pointer (points to local data on stack) if necessary. The requirement for the
callee to save/preserve the contents of these registers even applies in situations where the compiler
assigns them for argument passing.
For the AVR_TINY architecture (ATtiny10 etc.), r18-r19 and r28-r29 are call-saved. Registers r0 through
r15 do not exist.
Fixed registers (r0, r1)
Never allocated by gcc for local data, but often used for fixed purposes:
• r0 (__tmp_reg__) --- temporary register, can be clobbered by any code (except interrupt handlers which
save it), may be used to remember something for a while within one piece of assembly code
• r1 (__zero_reg__) --- assumed to be always zero in any C code, may be used to remember something for a
while within one piece of assembler code, but must then be cleared after use (clr __zero_reg__). This
includes any use of the [f]mul[s[u]] instructions, which return their result in r1:r0. Interrupt
handlers save and clear __zero_reg__ on entry, and restore it on exit (in case it was non-zero).
• T flag --- the T flag in the status register (SREG) can be used in the same way as __tmp_reg__.
For the AVR_TINY architecture (ATtiny10 etc.), __tmp_reg__ is r16, and __zero_reg__ is r17.
Function call conventions
Arguments - allocated left to right, r25 to r8. All arguments are aligned to start in even-numbered
registers (odd-sized arguments, including char, have one free register above them). This allows
making better use of the movw instruction on the enhanced core.
If too many, those that don't fit are passed on the stack.
On AVR_TINY, r25 to r20 are used to pass values.
• Return values: 8-bit in r24, 16-bit in r25:r24, up to 32 bits in r22-r25, up to 64 bits in r18-r25.
• Arguments to functions with a variable argument list (like printf) are all passed on the stack;
char is extended to int, and float is extended to double.
• When an argument is passed on the stack, all subsequent arguments are also passed on the stack.
• An argument is either passed completely in registers or completely on the stack.
• Return values with a size of zero or a size larger than 8 bytes (4 bytes on AVR_TINY) are returned in
memory. The caller provides the memory location as an implicit first argument to the callee.
• When a value is returned in registers, its size is rounded up to the next power of 2.
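As an illustration of these rules, consider the following hypothetical prototype (non-AVR_TINY devices;
the register assignments below merely follow from the rules listed above):
long f (char a, int b, long c);
/* a: passed in r24 (r25 stays free, since every argument starts in an
      even-numbered register)
   b: passed in r23:r22
   c: passed in r21:r20:r19:r18
   return value: in r25:r24:r23:r22, i.e. the 32-bit range r22-r25      */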
Back to FAQ Index.
How do I put an array of strings completely in ROM?
There are times when you may need an array of strings which will never be modified. In this case, you
don't want to waste ram storing the constant strings. The most obvious (and incorrect) thing to do is
this:
#include <avr/pgmspace.h>

const char* const array[2] PROGMEM = {
    "Foo",
    "Bar"
};

int main (void)
{
    char buf[32];

    strcpy_P (buf, array[1]);
    return 0;
}
The result is not what you want though. What you end up with is the array stored in ROM, while the
individual strings end up in RAM (in some .rodata input section).
To work around this, you need to do something like this:
#include <avr/pgmspace.h>

static const char foo[] PROGMEM = "Foo";
static const char bar[] PROGMEM = "Bar";

const char* const array[2] PROGMEM = {
    foo,
    bar
};

void func (uint8_t i)
{
    char buf[32];
    const char *p = pgm_read_ptr (&array[i]);

    strcpy_P (buf, p);
}
Looking at the disassembly of the resulting object file we see that array is in flash as such:
00000026 <array>:
26: 2e 00 2a 00
0000002a <bar>:
2a: 42 61 72 00 Bar.
0000002e <foo>:
2e: 46 6f 6f 00 Foo.
foo is at address 0x002e.
bar is at address 0x002a.
array is at address 0x0026.
Using named address-spaces
An alternative is to use the named address-space __flash, which is supported since avr-gcc v4.7 and in
GNU-C99 and up:
#include <avr/pgmspace.h>

#define F(X) ((const __flash char[]) { X })

const __flash char* const __flash array[] =
{
    F("Foo"), F("Bar")
};

int compare (const char *str, uint8_t i)
{
    return strcmp_P (str, array[i]);
}
Moreover, there is no more need for pgm_read_xxx(): the addresses of the string literals can be read
directly by means of array[i]. The header is only needed for the strcmp_P prototype.
Back to FAQ Index.
How to use external RAM?
Well, there is no universal answer to this question; it depends on what the external RAM is going to be
used for.
Basically, the bit SRE (SRAM enable) in the MCUCR register needs to be set in order to enable the
external memory interface. Depending on the device to be used, and the application details, further
registers affecting the external memory operation like XMCRA and XMCRB, and/or further bits in MCUCR
might be configured. Refer to the datasheet for details.
If the external RAM is going to be used to store the variables from the C program (i. e., the .data
and/or .bss segment), it is essential to set up the external memory interface early during the device
initialization so that the initialization of these variables will take place. Refer to How to modify
MCUCR or WDTCR early? for a description of how to do this using a few lines of assembler code, or to
the chapter about memory sections for an example written in C.
The explanation of malloc() contains a discussion about the use of internal RAM vs. external RAM in
particular with respect to the various possible locations of the heap (area reserved for malloc()). It
also explains the linker command-line options that are required to move the memory regions away from
their respective standard locations in internal RAM.
Finally, if the application simply wants to use the additional RAM for private data storage kept outside
the domain of the C compiler (e. g. through a char * variable initialized directly to a particular
address), it would be sufficient to defer the initialization of the external RAM interface to the
beginning of main(), so no tweaking of the .init3 section is necessary. The same applies if only the heap
is going to be located there, since the application start-up code does not affect the heap.
It is not recommended to locate the stack in external RAM. In general, accessing external RAM is slower
than internal RAM, and errata of some AVR devices even prevent this configuration from working properly
at all.
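A minimal sketch of the last case (the SRE bit in MCUCR and the address 0x8000 are examples for an
ATmega128-class device; check the datasheet of the actual part):
#include <avr/io.h>
#include <stdint.h>

int main (void)
{
    /* Enable the external memory interface before the first access.
       On an ATmega128 the SRE bit is in MCUCR; other devices use XMCRA. */
    MCUCR |= _BV(SRE);

    /* Private data storage outside the compiler's domain, at a
       hand-picked external address (example value).                    */
    volatile uint8_t *xram = (volatile uint8_t *) 0x8000;
    xram[0] = 42;

    for (;;)
        ;
}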
Back to FAQ Index.
Which -O flag to use?
There's a common misconception that larger numbers behind the -O option might automatically cause
'better' optimization. First, there's no universal definition for 'better', with optimization often being
a speed vs. code size trade off. See the detailed discussion for which option affects which part of the
code generation.
A test case was run on an ATmega128 to judge the effect of compiling the library itself using different
optimization levels. The following table lists the results. The test case consisted of around 2 KB of
strings to sort. Test #1 used qsort() using the standard library strcmp(), test #2 used a function that
sorted the strings by their size (thus had two calls to strlen() per invocation).
When comparing the resulting code size, it should be noted that a floating point version of fvprintf()
was linked into the binary (in order to print out the time elapsed) which is entirely not affected by the
different optimization levels, and added about 2.5 KB to the code.
Optimization Flags      Size of .text (bytes)   Time for Test #1   Time for Test #2
-O3                     6898                    903 µs             19.7 ms
-O2                     6666                    972 µs             20.1 ms
-Os                     6618                    955 µs             20.1 ms
-Os -mcall-prologues    6474                    972 µs             20.1 ms
(The difference between 955 µs and 972 µs was just a single timer-tick, so take this with a grain of salt.)
So generally, it seems -Os -mcall-prologues is the most universal 'best' optimization level. Only
applications that need to get the last few percent of speed benefit from using -O3.
Back to FAQ Index.
How do I relocate code to a fixed address?
First, put the function into a new, orphan named section. This is done with a section attribute on the
prototype of the function that specifies the name of the input section:
__attribute__ ((noinline, noclone, section (".bootloader")))
void boot (void);
The noinline and noclone attributes are required to make sure that the function is not (partially)
inlined into the caller, which does not have a respective section attribute.
Second, locate the section to the desired fixed address by means of linking with, say
-Wl,--section-start,.bootloader=0x1E000
see the -Wl compiler option. The name after --section-start is the name of the section to be located,
and the number specifies the beginning address of the named section.
This will only work when the section is an orphan section, i.e. a section that is not mentioned in the
linker script. For sections that are mentioned in the linker script, like for example .text.bootloader,
this will not work because --section-start refers to an output section, but the output section for input
section .text.bootloader is the .text section.
To verify that everything went as expected, generate the disassembly with avr-objdump ... -j .bootloader.
The top of the list file will show
1 .bootloader 00000004 00002000 00002000 00000204 2**0
CONTENTS, ALLOC, LOAD, READONLY, CODE
Or display section properties with avr-readelf --section-details
$ avr-readelf -t main.elf
Section Headers:
  [Nr] Name
       Type            Addr     Off    Size   ES   Lk Inf Al
       Flags
  [ 2] .bootloader
       PROGBITS        00002000 000204 000004 00    0   0  1
       [00000006]: ALLOC, EXEC
A different way to locate the section is by means of a custom linker script. The avr-gcc Wiki has an
example that locates the .progmem2.data section which is used by the compiler for variables in address-
space __flash2.
Back to FAQ Index.
My UART is generating nonsense! My ATmega128 keeps crashing! Port F is completely broken!
Well, certain odd problems arise out of the situation that the AVR devices as shipped by Atmel often come
with a default fuse bit configuration that doesn't match the user's expectations. Here is a list of
things to watch out for:
• All devices that have an internal RC oscillator ship with the fuse enabled that causes the device to
run off this oscillator, instead of an external crystal. This often remains unnoticed until the first
attempt is made to use something critical in timing, like UART communication.
• The ATmega128 ships with the fuse enabled that turns this device into ATmega103 compatibility mode.
This means that some ports are not fully usable, and in particular that the internal SRAM is located at
lower addresses. Since by default, the stack is located at the top of internal SRAM, a program compiled
for an ATmega128 running on such a device will immediately crash upon the first function call (or
rather, upon the first function return).
• Devices with a JTAG interface have the JTAGEN fuse programmed by default. This will make the respective
port pins that are used for the JTAG interface unavailable for regular IO.
Back to FAQ Index.
Why do all my 'foo...bar' strings eat up the SRAM?
By default, all strings are handled as all other initialized variables: they occupy RAM (even though the
compiler might warn you when it detects write attempts to these RAM locations), and occupy the same
amount of flash ROM so they can be initialized to the actual string by startup code.
That way, any string literal will be a valid argument to any C function that expects a const char*
argument.
Of course, this is going to waste a lot of SRAM. In Program Space String Utilities, a method is described
how such constant data can be moved out to flash ROM. However, a constant string located in flash ROM is
no longer a valid argument to pass to a function that expects a const char*-type string, since the AVR
processor needs the special instruction LPM to access these strings. Thus, separate functions are needed
that take this into account. Many of the standard C library functions have equivalents available where
one of the string arguments can be located in flash ROM. Private functions in the applications need to
handle this, too. For example, the following can be used to implement simple debugging messages that will
be sent through a UART:
#include <inttypes.h>
#include <avr/io.h>
#include <avr/pgmspace.h>

int uart_putchar(char c)
{
    if (c == '\n')
        uart_putchar('\r');
    loop_until_bit_is_set(USR, UDRE);
    UDR = c;
    return 0; /* so it could be used for fdevopen(), too */
}

void debug_P(const char *addr)
{
    char c;

    while ((c = pgm_read_byte(addr++)))
        uart_putchar(c);
}

int main(void)
{
    ioinit(); /* initialize UART, ... */
    debug_P(PSTR("foo was here\n"));
    return 0;
}
Note
By convention, the suffix _P to the function name is used as an indication that this function is
going to accept a 'program-space string'. Note also the use of the PSTR() macro.
Back to FAQ Index.
How to detect RAM memory and variable overlap problems?
You can simply run avr-nm on your output (ELF) file. Run it with the -n option, and it will sort the
symbols numerically (by default, they are sorted alphabetically).
Look for the symbol _end; that's the first address in RAM that is not allocated by a variable. (avr-gcc
internally adds 0x800000 to all data/bss variable addresses, so please ignore this offset.) Then, the
run-time initialization code initializes the stack pointer (by default) to point to the last available
address in (internal) SRAM. Thus, the region between _end and the end of SRAM is what is available for
stack. (If your application uses malloc(), which e. g. also can happen inside printf(), the heap for
dynamic memory is also located there. See Memory Areas and Using malloc().)
The amount of stack required for your application cannot be determined that easily. For example, if you
recursively call a function and forget to break that recursion, the amount of stack required is infinite.
:-) You can look at the generated assembler code (avr-gcc ... -S); there's a comment in each generated
assembler file that tells you the frame size for each generated function. That's the amount of stack
required for this function; you have to add that up for all functions where you know that the calls
could be nested.
Back to FAQ Index.
Is it really impossible to program the ATtinyXX in C?
While some small AVRs are not directly supported by the C compiler since they do not have a RAM-based
stack (and some do not even have RAM at all), it is possible anyway to use the general-purpose registers
as a RAM replacement since they are mapped into the data memory region.
Bruce D. Lightner wrote an excellent description of how to do this, and offers this together with a
toolkit on his web page:
http://lightner.net/avr/ATtinyAvrGcc.html
Back to FAQ Index.
What is this 'clock skew detected' message?
It's a known problem of the MS-DOS FAT file system. Since the FAT file system has only a granularity of 2
seconds for maintaining a file's timestamp, and it seems that some MS-DOS derivative (Win9x) perhaps
rounds up the current time to the next second when calculating the timestamp of an updated file in case
the current time cannot be represented in FAT's terms, this causes a situation where make sees a 'file
coming from the future'.
Since all make decisions are based on file timestamps, and their dependencies, make warns about this
situation.
Solution: don't use inferior file systems / operating systems. Neither Unix file systems nor HPFS (aka
NTFS) experience that problem.
Workaround: after saving the file, wait a second before starting make. Or simply ignore the warning. If
you are paranoid, execute a make clean all to make sure everything gets rebuilt.
In networked environments where the files are accessed from a file server, this message can also happen
if the file server's clock differs too much from the network client's clock. In this case, the solution
is to use a proper time keeping protocol on both systems, like NTP. As a workaround, synchronize the
client's clock frequently with the server's clock.
Back to FAQ Index.
Why are (many) interrupt flags cleared by writing a logical 1?
Usually, each interrupt has its own interrupt flag bit in some control register, indicating the specified
interrupt condition has been met by representing a logical 1 in the respective bit position. When working
with interrupt handlers, this interrupt flag bit usually gets cleared automatically in the course of
processing the interrupt, sometimes by just calling the handler at all, sometimes (e. g. for the U[S]ART)
by reading a particular hardware register that will normally happen anyway when processing the interrupt.
From the hardware's point of view, an interrupt is asserted as long as the respective bit is set, while
global interrupts are enabled. Thus, it is essential to have the bit cleared before interrupts get re-
enabled again (which usually happens when returning from an interrupt handler).
Only a few subsystems require an explicit action to clear the interrupt request when using interrupt
handlers. (The notable exception is the TWI interface, where clearing the interrupt indicates to proceed
with the TWI bus hardware handshake, so it's never done automatically.)
However, if no normal interrupt handlers are to be used, or in order to make extra sure any pending
interrupt gets cleared before re-activating global interrupts (e. g. an external edge-triggered one), it
can be necessary to explicitly clear the respective hardware interrupt bit by software. This is usually
done by writing a logical 1 into this bit position. This seems to be illogical at first, the bit position
already carries a logical 1 when reading it, so why does writing a logical 1 to it clear the interrupt
bit?
The solution is simple: writing a logical 1 to it requires only a single OUT instruction, and it is clear
that only this single interrupt request bit will be cleared. There is no need to perform a read-modify-
write cycle (like an SBI instruction), since all bits in these control registers are interrupt bits, and
writing a logical 0 to the remaining bits (as it is done by the simple OUT instruction) will not alter
them, so there is no risk of any race condition that might accidentally clear another interrupt request
bit. So instead of writing
TIFR |= _BV(TOV0); /* wrong! */
simply use
TIFR = _BV(TOV0);
Back to FAQ Index.
Why do 'programmed' fuses have the bit value 0?
Basically, each fuse is just a bit in a special EEPROM area. For technical reasons, erased E[E]PROM cells
have all bits set to the value 1, so unprogrammed fuses also have a logical 1. Conversely, programmed
fuse cells read out as bit value 0.
Back to FAQ Index.
Which AVR-specific assembler operators are available?
See Pseudo-Ops and Operand Modifiers.
Back to FAQ Index.
Why are interrupts re-enabled in the middle of writing the stack pointer?
When setting up space for local variables on the stack, the compiler generates code like this:
/* prologue: frame size=20 */
push r28
push r29
in r28,__SP_L__
in r29,__SP_H__
sbiw r28,20
in __tmp_reg__,__SREG__
cli
out __SP_H__,r29
out __SREG__,__tmp_reg__
out __SP_L__,r28
/* prologue end (size=10) */
It reads the current stack pointer value, decrements it by the required amount of bytes, then disables
interrupts, writes back the high part of the stack pointer, writes back the saved SREG (which will
eventually re-enable interrupts if they have been enabled before), and finally writes the low part of the
stack pointer.
At first glance, there's a race between restoring SREG and writing SPL. However, after enabling
interrupts (either explicitly by setting the I flag, or by restoring it as part of the entire SREG), the
AVR hardware executes (at least) the next instruction still with interrupts disabled, so the write to SPL
is guaranteed to be executed with interrupts disabled still. Thus, the emitted sequence ensures
interrupts will be disabled only for the minimum time required to guarantee the integrity of this
operation.
Back to FAQ Index.
Why are there five different linker scripts?
From a comment in the source code:
Which one of the five linker script files is actually used depends on command line options given to ld.
A .x script file is the default script.
A .xr script is for linking without relocation (-r flag).
A .xu script is like .xr but *do* create constructors (-Ur flag).
A .xn script is for linking with -n flag (mix text and data on same page).
A .xbn script is for linking with -N flag (mix text and data on same page).
Back to FAQ Index.
How to add a raw binary image to linker output?
The GNU linker avr-ld cannot handle binary data directly. However, there's a companion tool called
avr-objcopy. This is already known from the output side: it's used to extract the contents of the linked ELF
file into an Intel Hex load file.
avr-objcopy can create a relocatable object file from arbitrary binary input, like
avr-objcopy -I binary -O elf32-avr foo.bin foo.o
This will create a file named foo.o, with the contents of foo.bin. The contents will default to section
.data, and two symbols will be created named _binary_foo_bin_start and _binary_foo_bin_end. These symbols
can be referred to inside a C source to access these data.
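For example, a C module could refer to those symbols like this (a sketch; the symbol names follow from
the input file name foo.bin):
#include <stddef.h>

extern const char _binary_foo_bin_start[];
extern const char _binary_foo_bin_end[];

/* number of bytes that came from foo.bin */
size_t foo_size (void)
{
    return (size_t) (_binary_foo_bin_end - _binary_foo_bin_start);
}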
If the goal is to have those data go to flash ROM (similar to having used the PROGMEM attribute in C
source code), the sections have to be renamed while copying, and it's also useful to set the section
flags:
avr-objcopy --rename-section .data=.progmem.data,contents,alloc,load,readonly,data -I binary -O elf32-avr foo.bin foo.o
Note that all this could be conveniently wired into a Makefile, so whenever foo.bin changes, it will
trigger the recreation of foo.o, and a subsequent relink of the final ELF file.
Below are two Makefile fragments that provide rules to convert a .txt file to an object file, and to
convert a .bin file to an object file:
$(OBJDIR)/%.o : %.txt
        @echo Converting $<
        @cp $(<) $(*).tmp
        @echo -n 0 | tr 0 '\000' >> $(*).tmp
        @$(OBJCOPY) -I binary -O elf32-avr \
          --rename-section .data=.progmem.data,contents,alloc,load,readonly,data \
          --redefine-sym _binary_$*_tmp_start=$* \
          --redefine-sym _binary_$*_tmp_end=$*_end \
          --redefine-sym _binary_$*_tmp_size=$*_size_sym \
          $(*).tmp $(@)
        @echo "extern const char" $(*)"[] PROGMEM;" > $(*).h
        @echo "extern const char" $(*)_end"[] PROGMEM;" >> $(*).h
        @echo "extern const char" $(*)_size_sym"[];" >> $(*).h
        @echo "#define $(*)_size ((int)$(*)_size_sym)" >> $(*).h
        @rm $(*).tmp

$(OBJDIR)/%.o : %.bin
        @echo Converting $<
        @$(OBJCOPY) -I binary -O elf32-avr \
          --rename-section .data=.progmem.data,contents,alloc,load,readonly,data \
          --redefine-sym _binary_$*_bin_start=$* \
          --redefine-sym _binary_$*_bin_end=$*_end \
          --redefine-sym _binary_$*_bin_size=$*_size_sym \
          $(<) $(@)
        @echo "extern const char" $(*)"[] PROGMEM;" > $(*).h
        @echo "extern const char" $(*)_end"[] PROGMEM;" >> $(*).h
        @echo "extern const char" $(*)_size_sym"[];" >> $(*).h
        @echo "#define $(*)_size ((int)$(*)_size_sym)" >> $(*).h
Back to FAQ Index.
How do I perform a software reset of the AVR?
The canonical way to perform a software reset of non-XMega AVRs is to use the watchdog timer. Enable the
watchdog timer with its shortest timeout setting, then go into an infinite, do-nothing loop. The watchdog
will then reset the processor.
XMega parts have a specific bit RST_SWRST_bm in the RST.CTRL register that generates a hardware reset.
RST_SWRST_bm is protected by the XMega Configuration Change Protection system.
The reason why using the watchdog timer or RST_SWRST_bm is preferable over jumping to the reset vector,
is that when the watchdog or RST_SWRST_bm resets the AVR, the registers will be reset to their known,
default settings. Whereas jumping to the reset vector will leave the registers in their previous state,
which is generally not a good idea.
CAUTION!
Older AVRs will have the watchdog timer disabled on a reset. For these older AVRs, doing a soft reset
by enabling the watchdog is easy, as the watchdog will then be disabled after the reset. On newer
AVRs, once the watchdog is enabled, then it stays enabled, even after a reset! For these newer AVRs a
function needs to be added to the .init3 section (i.e. during the startup code, before main()) to
disable the watchdog early enough so it does not continually reset the AVR.
Here is some example code that defines a small inline function that can be called to perform a soft reset:
#include <avr/wdt.h>

static inline __attribute__((__always_inline__))
void soft_reset (void)
{
    wdt_enable (WDTO_15MS);
    for (;;) {}
}
For newer AVRs (such as the ATmega1281) also add this function to your code to then disable the watchdog
after a reset (e.g., after a soft reset):
#include <avr/wdt.h>

// Function Prototype
static __attribute__((used, unused, naked, section(".init3")))
void wdt_init (void);

// Function Implementation
void wdt_init (void)
{
    MCUSR = 0;
    wdt_disable();
}
The code is placed in section .init3 so that it is executed as part of the normal startup procedure. The
naked attribute is required so that the code does not return (Code in init sections is executed as it is
located; the code is not called, and code from one init section falls through to the code in the next
one). The used attribute makes sure that the compiler does not throw the seemingly unused function away.
The unused attribute avoids warnings about unused code.
Back to FAQ Index.
What pitfalls exist when writing reentrant code?
Reentrant code means the ability for a piece of code to be called simultaneously from two or more
threads. Attention to reentrancy is needed when using a multi-tasking operating system, or when
using interrupts, since an interrupt is really a temporary thread.
The code generated natively by gcc is reentrant. But, only some of the libraries in AVR-LibC are
explicitly reentrant, and some are known not to be reentrant. In general, any library call that reads and
writes global variables (including I/O registers) is not reentrant. This is because more than one thread
could read or write the same storage at the same time, unaware that other threads are doing the same, and
create inconsistent and/or erroneous results.
A library call that is known not to be reentrant will work if it is used only within one thread and no
other thread makes use of a library call that shares common storage with it.
Below is a table of library calls with known issues.
rand, random
    Issue: Uses global variables to keep state information.
    Workaround/Alternative: Use the special reentrant versions rand_r and random_r.

strtof, strtod, strtol, strtoul
    Issue: Uses the global variable errno to return success/failure.
    Workaround/Alternative: Ignore errno, or protect calls with cli/sei or ATOMIC_BLOCK if the
    application can tolerate it. Or use scanf or scanf_P if possible.

malloc, realloc, calloc, free
    Issue: Uses the stack pointer and global variables to allocate and free memory.
    Workaround/Alternative: Protect calls with cli/sei or ATOMIC_BLOCK if the application can tolerate
    it. If using an OS, use the OS-provided memory allocator since the OS is likely modifying the stack
    pointer anyway.

fdevopen, fclose
    Issue: Uses calloc and free.
    Workaround/Alternative: Protect calls with cli/sei or ATOMIC_BLOCK if the application can tolerate
    it. Or use fdev_setup_stream or FDEV_SETUP_STREAM.
    Note: fclose will only call free if the stream has been opened with fdevopen.

eeprom_*, boot_*
    Issue: Accesses I/O registers.
    Workaround/Alternative: Protect calls with cli/sei, ATOMIC_BLOCK, or use OS locking.

pgm_*_far
    Issue: Accesses I/O register RAMPZ.
    Workaround/Alternative: Starting with GCC 4.3, RAMPZ is automatically saved for ISRs, so nothing
    further is needed if only using interrupts. Some OSes may automatically preserve RAMPZ during
    context switching; check the OS documentation before assuming it does. Otherwise, protect calls
    with cli/sei, ATOMIC_BLOCK, or use explicit OS locking.

printf, printf_P, vprintf, puts, puts_P
    Issue: Alters flags and character count in the global FILE stdout.
    Workaround/Alternative: Use only in one thread. Or if the returned character count is unimportant,
    do not use the *_P versions.
    Note: Formatting to a string output, e.g. sprintf, sprintf_P, snprintf, snprintf_P, vsprintf,
    vsprintf_P, vsnprintf, vsnprintf_P, is thread safe. The formatted string could then be followed by
    an fwrite which simply calls the lower layer to send the string.

fprintf, fprintf_P, vfprintf, vfprintf_P, fputs, fputs_P
    Issue: Alters flags and character count in the FILE argument. Problems can occur if a global FILE
    is used from multiple threads.
    Workaround/Alternative: Assign each thread its own FILE for output. Or if the returned character
    count is unimportant, do not use the *_P versions.

assert
    Issue: Contains an embedded fprintf; see above for fprintf.
    Workaround/Alternative: See above for fprintf.

clearerr
    Issue: Alters flags in the FILE argument.
    Workaround/Alternative: Assign each thread its own FILE for output.

getchar, gets
    Issue: Alters flags, character count, and unget buffer in the global FILE stdin.
    Workaround/Alternative: Use only in one thread. ***

fgetc, ungetc, fgets, scanf, scanf_P, fscanf, fscanf_P, vscanf, vfscanf, vfscanf_P, fread
    Issue: Alters flags, character count, and unget buffer in the FILE argument.
    Workaround/Alternative: Assign each thread its own FILE for input. ***
    Note: Scanning from a string, e.g. sscanf and sscanf_P, is thread safe.
Note
It's not clear one would ever want to do character input simultaneously from more than one thread
anyway, but these entries are included for completeness.
An effort will be made to keep this table up to date if any new issues are discovered or introduced.
Back to FAQ Index.
Why are some addresses of the EEPROM corrupted (usually address zero)?
The two most common reasons for EEPROM corruption are writing to the EEPROM beyond the datasheet
endurance specification, and resetting the AVR while an EEPROM write is in progress.
EEPROM writes can take up to tens of milliseconds to complete. So that the CPU is not tied up for that
long, an internal state-machine handles EEPROM write requests. The EEPROM state-machine expects all of
the EEPROM registers to be set up, followed by an EEPROM write request to start the process. Once the
EEPROM state-machine has started, changing EEPROM related registers during an EEPROM write is guaranteed
to corrupt the EEPROM write process. The datasheet always shows the proper way to tell when a write is in
progress, so that the registers are not changed by the user's program. The EEPROM state-machine will
always complete the write in progress unless power is removed from the device.
As with all EEPROM technology, if power fails during an EEPROM write the state of the byte being written
is undefined.
In older generation AVRs the EEPROM Address Register (EEAR) is initialized to zero on reset, be it from
Brown Out Detect, Watchdog or the Reset Pin. If an EEPROM write has just started at the time of the
reset, the write will be completed, but now at address zero instead of the requested address. If the
reset occurs later in the write process both the requested address and address zero may be corrupted.
To distinguish which AVRs may exhibit the corruption of address zero while a write is in progress during a
reset, look at the 'initial value' section for the EEPROM Address Register. If EEAR shows the initial
value as 0x00 or 0x0000, then address zero and possibly the one being written will be corrupted. Newer
parts show the initial value as 'undefined', these will not corrupt address zero during a reset (unless
it was address zero that was being written).
EEPROMs have limited write endurance. The datasheet specifies the number of EEPROM writes that are
guaranteed to function across the full temperature specification of the AVR, for a given byte. A read
should always be performed before a write, to see if the value in the EEPROM actually needs to be
written, so as not to cause unnecessary EEPROM wear.
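A minimal sketch of that read-before-write idea (AVR-LibC also provides eeprom_update_byte(), which
performs the same check internally):
#include <avr/eeprom.h>
#include <stdint.h>

void ee_write_if_changed (uint8_t *addr, uint8_t value)
{
    /* only start a (wearing) write cycle if the cell actually changes */
    if (eeprom_read_byte (addr) != value)
        eeprom_write_byte (addr, value);
}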
The failure mechanism for an overwritten byte is generally one of 'stuck' bits, i. e. a bit will stay at
a one or zero state regardless of the byte written. Also a write followed by a read may return the
correct data, but the data will change with the passage of time, due to the EEPROM's inability to hold a
charge from the excessive write wear.
Back to FAQ Index.
Why is my baud rate wrong?
Some AVR datasheets give the following formula for calculating baud rates:
(F_CPU/(UART_BAUD_RATE*16L)-1)
Unfortunately that formula does not work with all combinations of clock speeds and baud rates, due to
the integer truncation performed by the division.
When doing integer division it is usually better to round to the nearest integer, rather than towards
zero. To do this, add half the value of the denominator to the numerator before the division (which is
equivalent to adding 0.5 to the quotient), resulting in the formula:
((F_CPU + UART_BAUD_RATE * 8L) / (UART_BAUD_RATE * 16L) - 1)
This is also the way it is implemented in <util/setbaud.h>: Helper macros for baud rate calculations.
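A brief worked example (the clock and baud rate values are only illustrations):
/* F_CPU = 16000000, desired baud rate = 115200 */
/* truncating:  16000000 / (115200 * 16L) - 1                =  7
   -> actual rate 16000000 / (16 * (7 + 1)) = 125000, about +8.5 % off */
/* rounding:   (16000000 + 115200 * 8L) / (115200 * 16L) - 1 =  8
   -> actual rate 16000000 / (16 * (8 + 1)) = 111111, about -3.5 % off */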
Back to FAQ Index.
On a device with more than 128 KiB of flash, how to make function pointers work?
Function pointers beyond the 'magical' 128 KiB barrier(s) on larger devices are supposed to be resolved
through so-called trampolines by the linker, so the actual pointers used in the code can remain 16 bits
wide.
In order for this to work, the option -mrelax must be given on the compiler command-line that is used to
link the final ELF file. (Older compilers did not implement this option for the AVR, use -Wl,--relax
instead.)
See also the avr-gcc online documentation on the EIND special function register and indirect calls.
Back to FAQ Index.
Why is assigning ports in a 'chain' a bad idea?
Suppose a number of IO port registers should get the value 0xff assigned. Conveniently, it is implemented
like this:
DDRB = DDRD = 0xff;
According to the rules of the C language, this causes 0xff to be assigned to DDRD, then DDRD is read
back, and the value is assigned to DDRB. The compiler stands no chance to optimize the readback away, as
an IO port register is declared 'volatile'. Thus, chaining that kind of IO port assignments would better
be avoided, using explicit assignments instead:
DDRB = 0xff;
DDRD = 0xff;
Even worse is this, e. g. on an ATmega1281:
DDRA = DDRB = DDRC = DDRD = DDRE = DDRF = DDRG = 0xff;
The same happens as outlined above. However, when reading back register DDRG, this register only
implements 6 out of the 8 bits, so the two topmost (unimplemented) bits read back as 0! Consequently, all
remaining DDRx registers get assigned the value 0x3f, which does not match the intention of the developer
in any way.
Back to FAQ Index.
Which header files are included in my program?
Suppose we have a simple program like
#include <avr/pgmspace.h>
int main (void)
{
return 0;
}
and we want to know which files this #include triggers. Just add option -H to the compiler options and
check what is printed on standard output:
$ avr-gcc -H -S main.c -mmcu=atmega8
. <install>/avr/include/avr/pgmspace.h
.. <install>/avr/include/inttypes.h
... <install>/lib/gcc/avr/<version>/include/stdint.h
.... <install>/avr/include/stdint.h
.. <install>/lib/gcc/avr/<version>/include/stddef.h
.. <install>/avr/include/avr/io.h
... <install>/avr/include/avr/sfr_defs.h
... <install>/avr/include/avr/iom8.h
... <install>/avr/include/avr/portpins.h
...
where <install> denotes the installation path, <version> denotes the GCC version, and the number of dots
indicates the include level, e.g. inttypes.h is included by pgmspace.h.
When -v is added to the compiler options, then the search paths are also displayed (amongst other stuff):
#include '...' search starts here:
#include <...> search starts here:
<install>/bin/../lib/gcc/avr/<version>/include
<install>/bin/../lib/gcc/avr/<version>/include-fixed
<install>/bin/../lib/gcc/avr/<version>/../../../../avr/include
End of search list.
After resolving the ..'s for 'parent directory', the last directory becomes
<install>/avr/include.
Back to FAQ Index.
Which macros are defined in my program? Where are they defined, and to what value?
One way is to add -save-temps and -g3 to the compiler options. This saves the temporary files like the
pre-processed source code in an .i file (for C sources), an .ii (for C++), or a .s (for assembly). A
debug level of DWARF3 or higher is required to include the macro definitions in the file; with lower
debug levels, only the preprocessed source will be present.
For a module with a simple #include <avr/pgmspace.h>, the saved intermediate file might look something
like:
# 0 '<built-in>'
#define __STDC__ 1
The __STDC__ macro is defined built-in in the compiler.
# 0 '<command-line>'
#define __AVR_DEVICE_NAME__ atmega8
The __AVR_DEVICE_NAME__ macro is defined on the command line by means of -D __AVR_DEVICE_NAME__=atmega8.
In this special case, the -D option is added by the specs file specs-atmega8.
# 81 '<install>/avr/include/avr/pgmspace.h' 3
#define __PGMSPACE_H_ 1
#define __need_size_t
The __PGMSPACE_H_ macro is defined in line 81 of that header file. When there is no line note directly
above the definition, go up until you find a line note. For example, the __need_size_t macro is defined
in line 84 of that file.
Back to FAQ Index.
What ISR names are available for my device?
One way to find the possible ISR names is to pre-process a small file, and to grep for possible names,
like in:
$ echo '#include <avr/io.h>' | avr-gcc -xc - -mmcu=atmega8 -E -dM | grep _VECTOR
#define INT0_vect _VECTOR(1)
#define INT1_vect _VECTOR(2)
#define TIMER2_COMP_vect _VECTOR(3)
#define TIMER2_OVF_vect _VECTOR(4)
#define TIMER1_CAPT_vect _VECTOR(5)
...
Explanation:
echo '#include <avr/io.h>'
This prints #include <avr/io.h> to the standard output, which is picked up by the following command
as a C program to be preprocessed.
avr-gcc -xc - -mmcu=atmega8 -E -dM
Set the input language to C, read the program from standard input (specified by a dash), preprocess,
and print all macro definitions to the standard output.
grep _VECTOR
Only print lines with _VECTOR in them.
The output above was actually generated with an additional | sort -t '(' -k 2n so that the vectors
are printed in order.
In order to find the respective vector numbers, use grep _vect_num instead.
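Once a suitable name has been found, it can be used with the ISR() macro; a minimal sketch (the vector
name is taken from the ATmega8 output above):
#include <avr/interrupt.h>

ISR (TIMER2_OVF_vect)
{
    /* handle the timer 2 overflow here */
}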
Back to FAQ Index.
AVR-LibC Version 2.2.1 FAQ(3avr)