Well, sort of? What do you mean by "code"?
You should read up on
linking, the process of taking code and data objects and pasting them together into a final executable image/file. This is the final step, after the compiler is done with its work, and sections are very important here!
.text is the code, i.e., the compiled machine instructions resulting from your program statements and all that. The rest are where variables, arrays, data structures -- any data in general -- get allocated.
There are different data sections, because some shortcuts can be taken. For example, .bss is all zeroes (on most platforms), so the memory can be wiped on program startup, and that's that -- no need to store a huge pile of zeroes.
Everything else that's initialized to expected values has to be placed somewhere, so that those values can be copied in from the executable file.
Say your code has a series of variable declarations, e.g.
int foo, bar = 1, baz;
char b = 0, c, d = 0x40;
In general, you can probably expect (but always verify, if you must rely on this at all -- and please don't actually rely on this property) that foo, baz, maybe b, and c will be placed in .bss in the order they were declared; and that bar, maybe b, and d will be placed in .data in that order. But don't expect ALL the variables to be allocated in the order given, because that would be wasteful (the .bss zeroing is a simple for() loop generated by the compiler, or run by the OS -- it's not going to zero random bytes in a patchwork section, and there's no savings to be had by interleaving the zeroes and the initializers).
The run-time function of sections depends on the targeted architecture. For example, on platforms with memory protection, data sections will typically be tagged as no-execute (so that an attacker can't copy a buffer into the data section and have it executed as code), and code sections will typically be tagged as executable and read-only -- data reads may or may not be allowed, depending on the platform, but writes certainly aren't (the "W^X" idea: no page both writable and executable).
Some platforms take this farther: Harvard architectures, like AVR, fundamentally do not have* a common memory area: they have separate
buses for code and data. The CPU can't read data as code, period, because it's not physically wired to!
*Except they of course do, because it would be a massive wasted opportunity not to. On AVR, this is the LPM instruction (load program memory). The downside is, C assumes a flat (Von Neumann) architecture, so when you declare and access variables stored in this way, you have to use macros to do it, which then compile to the correct addresses and instructions. It's messier than just having sections tagged and letting the OS set memory protections.
Harvard architectures are a bit of a special case, with memory protection being the general case, where any physical memory can be mapped to any logical address, and tagged as any type (code, read-only, read-write..).
On older platforms (like the 8086), sections would be divided according to memory segmentation requirements, and the memory model specified to the linker. The 8086 is a real-mode processor: all memory can be trampled by any program. (The only way multiple programs (including the OS itself) can coexist on an 8086 is if they all cooperate, without trampling each others' memory.) A segment on the 8086 addresses up to 64kiB. Usually, a segment is allocated so that variables start at offset zero; I suppose if a segment isn't filled up, the next allocated segment might overlap it, in which case an unbounded array access (ever so easy to do in C) or buffer overflow would trample variables in both segments.

At startup, the OS (MS-DOS, etc.) reads the EXE file and copies its data (including the code) into memory as laid out in the EXE header, then jumps to the code's entry point (which is probably the compiler's initializer code, and then it's on to your code as such).
Or if you're working in assembler, you might not have sections at all. On the 8086, a .COM file is just a flat, <= 64kiB chunk of code and data, which is loaded and jumped into at offset 0x0100. (Normally you'd still use sections, to take advantage of even these basic memory management facilities, even when you technically don't have to.) Or on a Harvard architecture, you necessarily have to use sections, to allocate variables and write code, respectively. (Note that assembler goes through the same linking process, and these sections are just hints passed along to the linker -- this is why the linker is critical to understanding the memory model of an executable.)
Tim