Author Topic: Binary vs. Hex in source  (Read 5688 times)


Offline robotix3 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 90
  • Country: us
Binary vs. Hex in source
« on: January 21, 2021, 05:32:39 pm »
I've been wondering for a while why most programmers use hexadecimal in their source code when representing a register mapping or something similar, where you need to convert back to binary to have any idea of what the line of code is doing (or have to re-convert when making a modification), such as when an IC uses each bit for a different function. I tend to just type out the binary representation instead (0b00101000 vs. 0x28).

Is there an advantage in using hexadecimal or is this just a learned habit?
 
The following users thanked this post: BitsnBytes

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14911
  • Country: fr
Re: Binary vs. Hex in source
« Reply #1 on: January 21, 2021, 05:49:32 pm »
Is there an advantage in using hexadecimal or is this just a learned habit?

An obvious one: the probability of mistyping a binary-coded constant is MUCH higher than mistyping a hexadecimal one.
Another one is readability. Yes, actually, hex constants are much more readable than binary ones. Try and convince me that you can spot a given bit (by its index) in a longish binary constant. Whereas figuring that out with a hex constant, if you half know your hexadecimal, is much easier.
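For instance, try spotting bit 10 in each of these (made-up constants, just to sketch the point):
Code: [Select]
/* In the binary form you have to count digit positions from the
   right; in the hex form, bit 10 is simply the 4 in the third nibble. */
#define CFG_BIN 0b0000010000000000   /* bit 10 set */
#define CFG_HEX 0x0400               /* bit 10 set */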

 
The following users thanked this post: newbrain

Offline Ian.M

  • Super Contributor
  • ***
  • Posts: 12981
Re: Binary vs. Hex in source
« Reply #2 on: January 21, 2021, 05:54:15 pm »
Also, because the C standard committee rejected binary constants: "A proposal to add binary constants was rejected due to lack of precedent and insufficient utility." (line 30, page 51 (58 in PDF) [here]). So they are a non-standard extension (although widely implemented), the use of which may be prohibited by corporate and/or industry C coding standards.
 
The following users thanked this post: newbrain

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11484
  • Country: us
    • Personal site
Re: Binary vs. Hex in source
« Reply #3 on: January 21, 2021, 06:15:35 pm »
The C standard committee is hopeless. What lack of precedent? All modern languages support binary constants.

But yes, hex is much easier to read and interpret mentally, especially on constants larger than 8 bits.
Alex
 
The following users thanked this post: Whales

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4238
  • Country: nz
Re: Binary vs. Hex in source
« Reply #4 on: January 21, 2021, 06:24:54 pm »
I've been wondering for a while why most programmers use hexadecimal in their source code when representing a register mapping or something similar, where you need to convert back to binary to have any idea of what the line of code is doing (or have to re-convert when making a modification), such as when an IC uses each bit for a different function. I tend to just type out the binary representation instead (0b00101000 vs. 0x28).

Is there an advantage in using hexadecimal or is this just a learned habit?

It's very hard to understand exactly which bit numbers are set in a long binary number without laboriously counting them. You might get away with it for an 8-bit number, but it's brutal for a 16- or 32-bit value.

0b01001010010100000100010101000111  -- what the heck is that?

0x4a504547 -- most experienced programmers will immediately recognize this as 4 uppercase ASCII characters

At least with C++14 you can write the binary literal as 0b0100'1010'0101'0000'0100'0101'0100'0111, which helps a little, but this isn't accepted by C, even with GNU extensions.

 
The following users thanked this post: newbrain

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 3625
  • Country: us
Re: Binary vs. Hex in source
« Reply #5 on: January 21, 2021, 06:26:21 pm »
I don't write C, but aren't the shift operations inherently binary?

Code: (C) [Select]

#include <stdio.h>

int main(void)
{
    int a = 4;  // 0b00000100
    int b = 8;  // 0b00001000

    // prints a>>1 = 2 (binary 00000010)
    printf("a>>1 = %d\n", a >> 1);

    // prints b>>1 = 4 (binary 00000100)
    printf("b>>1 = %d\n", b >> 1);
    return 0;
}

The decision not to allow binary in C seems contradictory if such shifts are allowed.  Surely they are not shifting hexadecimal characters.

In assembly, I use hex when convenient and obvious, e.g. when using byte instructions such as IORLW 0xB0 to set a new page command for some GLCD.  When setting bits in certain registers, e.g. some of the set-up registers, I use binary as the result is more obvious.  The TS didn't say which language he was using, although I agree it is probably C.

 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2246
  • Country: fi
Re: Binary vs. Hex in source
« Reply #6 on: January 21, 2021, 06:29:53 pm »
Is there an advantage in using hexadecimal or is this just a learned habit?

I see it in binary.

How many bits did you have in mind, 16?
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the workshop of the world)
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11484
  • Country: us
    • Personal site
Re: Binary vs. Hex in source
« Reply #7 on: January 21, 2021, 06:31:01 pm »
For delimiters I personally would prefer underscores (and groupings of 8): 0b01001010_01010000_01000101_01000111.

But again, I would not use binary for actual arbitrary constants, just something that naturally has one or a few bits set.

But in practice, I use GCC extensions all the time, so standards compatibility is not an issue for me. Yet, I don't think I've ever used binary. Hex works fine.
« Last Edit: January 21, 2021, 06:32:34 pm by ataradov »
Alex
 

Offline Syntax Error

  • Frequent Contributor
  • **
  • Posts: 584
  • Country: gb
Re: Binary vs. Hex in source
« Reply #8 on: January 21, 2021, 06:44:03 pm »
@robotix3 It's habit. I often use decimal, which is really annoying for the hex nerds. Yep, that's 255 not 0xFF. Binary is great for seeing which bits are set. You can mix and match!

When it comes to setting registers etc., programmers should use the predefined symbols:
Code: [Select]
#define PORTB (*(volatile uint8_t *)0x0E)  // hypothetical register address
#define DDR3 3
// set portb pin 3 to output
PORTB |= ( 1 << DDR3 );

#define LEDRED 0b00000100
...
// set pin 2 (the LED) to high/on
PORTB |= LEDRED;

I should add that when it comes to address ranges (which can be in gigabytes), hex is always used. For example UBI_SPACE=0xa00000100000-0xa00007f80000
Remember the default network MAC address of 0xDE:AD:BE:EF:FE:ED :)
« Last Edit: January 21, 2021, 08:13:36 pm by Syntax Error »
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 7040
  • Country: va
Re: Binary vs. Hex in source
« Reply #9 on: January 21, 2021, 07:55:39 pm »
Hex is easier to read and successfully write. Strings of 0s and 1s mean nothing.

But it depends on context. If I'm defining, say, a character bitmap then using binary shows the actual character there on the paper whereas hex would just be indecipherable. There are other similar situations, so the proper answer to this question is a bit like 'should I use gotos': generally not, but where it's appropriate, sure.
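For instance, a made-up 5x7 glyph for 'A' (one byte per row, names hypothetical) draws itself right in the source:
Code: [Select]
static const unsigned char glyph_A[7] = {
    0b01110000,   /* .XXX. */
    0b10001000,   /* X...X */
    0b10001000,   /* X...X */
    0b11111000,   /* XXXXX */
    0b10001000,   /* X...X */
    0b10001000,   /* X...X */
    0b10001000,   /* X...X */
};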
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4238
  • Country: nz
Re: Binary vs. Hex in source
« Reply #10 on: January 21, 2021, 11:34:24 pm »
For delimiters I personally would prefer underscores (and groupings of 8): 0b01001010_01010000_01000101_01000111.

Sadly we don't get that option, as the C++ people consciously decided not to follow Ada. The problem, as I understand it, is that _ starts a new identifier.

Quote
But again, I would not use binary for actual arbitrary constants, just something that naturally has one or a few bits set.

And yet hex constants such as 0x00040000 are super-easy to understand.

Quote
But in practice, I use GCC extensions all the time, so standards compatibility is not an issue for me. Yet, I don't think I've ever used binary. Hex works fine.

As far as I could quickly establish, GNU doesn't have an extension for this.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3797
  • Country: us
Re: Binary vs. Hex in source
« Reply #11 on: January 22, 2021, 12:06:38 am »
For delimiters I personally would prefer underscores (and groupings of 8): 0b01001010_01010000_01000101_01000111.

Sadly we don't get that option, as the C++ people consciously decided not to follow Ada. The problem, as I understand it, is that _ starts a new identifier.

Ugh.  I first read that and was like "no way, _ isn't anything special, this would only be an issue if you tried to use _ at the beginning of a literal."  In C, _ would be a fine choice and would not create any ambiguity.  In C++ it doesn't conflict with identifiers (which can't start with a digit) but with user-defined literals, which were added in C++11 and use the underscore syntax.  Basically this means: Sometype t = 0x1234_abc is transformed into a call to operator""_abc(0x1234).  It looks like it is supposed to mimic the type tags such as 1LL.  I'm sure that is useful, although I for one would prefer legible separator characters.
 
The following users thanked this post: newbrain, SiliconWizard

Offline sleemanj

  • Super Contributor
  • ***
  • Posts: 3030
  • Country: nz
  • Professional tightwad.
    • The electronics hobby components I sell.
Re: Binary vs. Hex in source
« Reply #12 on: January 22, 2021, 12:39:25 am »
Hex is easier to read and successfully write. Strings of 0s and 1s mean nothing.

"I want to set bits 1, 2 and 5", 0b00010011 or 0x13, for me it's the binary that's faster and clearer both to read and write.  But I've never been good at hex conversion in my head.

Equally, "which bits are set by 0x44" requires considerable brain cycles (or finding the calculator), while "which bits are set by 0b01000100" requires zero brain cycles because that's the answer already.

As others say, though, once you get past 8 bits the binary representation gets very unwieldy; humans are good with groups of 7 to 10 things at most (bits, in this case), and beyond that we lose track easily.


 
~~~
EEVBlog Members - get yourself 10% discount off all my electronic components for sale just use the Buy Direct links and use Coupon Code "eevblog" during checkout.  Shipping from New Zealand, international orders welcome :-)
 
The following users thanked this post: Whales

Offline MIS42N

  • Frequent Contributor
  • **
  • Posts: 515
  • Country: au
Re: Binary vs. Hex in source
« Reply #13 on: January 22, 2021, 01:26:34 am »
Surely it's whatever is appropriate for the use. Here's some code I wrote to set up a DAC. I used binary and decimal constants:

; --- DAC ---
; Analog input on RC2 - all RCx inputs are analog at reset, only
; RC2 is used as input so no need to change ANSEL. Get the comparator
; +ve input from the DAC - set to approx 1.9V
; V = (DACCON0/32)*5[Vdd]
   BANKSEL   DACCON0
   MOVLW   D'12' ; set the voltage
   MOVWF   DACCON1
   MOVLW   B'10000000' ; enable DAC, source Vdd
; 1 x 0 0 0 0 x x
; |   | | -+-
; |   | |  +----- 00 = DAC Positive Source Vdd
; |   | +--------  0 = DAC not connected to the DACOUT2 pin
; |   +----------  0 = DAC not connected to the DACOUT1 pin
; +--------------  1 = DAC is enabled
   MOVWF   DACCON0

But in the code where masks were used, hex made sense

   MOVF    PWM,W   ; Add the least significant
   ADDWF   Dithr,F   ; 14 bits of PWM to Dithr
   MOVF    PWM+1,W
   ANDLW   0x3F ; truncate to top 6 of 14 bits
   BTFSC   STATUS,C
   ADDLW   0x01 ; add in carry
   ADDWF   Dithr+1,W ; add to the current Dithr
   BTFSC   WREG,6 ; was there 14 bit overflow
   BSF   STATUS,C
   ANDLW   0x3F ; truncate to top 6 of 14 bits
   MOVWF   Dithr+1
   MOVF    PWM+1,W ; now work out the top 10 bits
   ANDLW   0xC0 ; get least significant two bits of 10
   BTFSC   STATUS,C
   ADDLW   0x40 ; yes, increase duty cycle 1 bit

I just use whatever base makes the most sense when I come back to read it later.
 
The following users thanked this post: LA7SJA, lwatts666

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5010
  • Country: si
Re: Binary vs. Hex in source
« Reply #14 on: January 22, 2021, 07:30:42 am »
It sort of makes sense at 8-bit, but when you get to 32-bit the binary representation is just one huge snake of 1s and 0s. C was sort of designed for machines larger than 8 bits.

But I don't see why modern C standards would not include binary number literals. It can be pretty useful given that C is the de-facto standard for low-level MCU development.

Heck, even C# has it! It even allows you to put the underscore delimiters anywhere you like. Yet it's a high-level managed language that is so far from the hardware that it doesn't even care what instruction set the CPU is running. If anything C# shouldn't have this, yet it does (probably because it was so easy to do and doesn't get in the way if you don't use it)
Code: [Select]
int myValue = 0b0010_0110_0000_0011;
Still, hex is not that hard to read once you get used to it. A great way to assist in mentally converting between hex and binary is printing out a nibble table and leaving it somewhere near your monitor. You will quickly memorize the common ones, and if you forget a more twisty one like 0xD you can just glance at the table.
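Such a table is small enough to reproduce here (one line per nibble value):
Code: [Select]
0 = 0000    4 = 0100    8 = 1000    C = 1100
1 = 0001    5 = 0101    9 = 1001    D = 1101
2 = 0010    6 = 0110    A = 1010    E = 1110
3 = 0011    7 = 0111    B = 1011    F = 1111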
« Last Edit: January 22, 2021, 07:32:27 am by Berni »
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2246
  • Country: fi
Re: Binary vs. Hex in source
« Reply #15 on: January 22, 2021, 09:09:05 am »
D is clearly C + 1, like B is A + 1.

Here, anything over a byte is generally hex, but the lowest 8 bits can differ, e.g. set as 64 + 32.

Notation is also important.
The whitespace must be there, and adding -1 must also be visible.
(capital mnemonics, bad rap)
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the workshop of the world)
 

Offline ledtester

  • Super Contributor
  • ***
  • Posts: 3116
  • Country: us
Re: Binary vs. Hex in source
« Reply #16 on: January 22, 2021, 09:50:41 am »
In some cases it makes sense to represent constants in octal.
 

Offline Ian.M

  • Super Contributor
  • ***
  • Posts: 12981
Re: Binary vs. Hex in source
« Reply #17 on: January 22, 2021, 12:20:03 pm »
In some cases it makes sense to represent constants in octal.
If you aren't coding for a processor with 3*N bits per word, and your code uses octal constants *KILL* *IT* *WITH* *FIRE* !!! 
 
The following users thanked this post: Shock, rs20

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16906
  • Country: us
  • DavidH
Re: Binary vs. Hex in source
« Reply #18 on: January 22, 2021, 04:55:05 pm »
I freely switch between binary and hexadecimal constants depending on which is more applicable in each case.  Sometimes I will use multiple binary constants added together to better show what is going on.
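A sketch of what that can look like (register and field names made up):
Code: [Select]
/* Control byte built from per-field binary constants: each addend
   shows its own field, and the sum is the register value. */
#define CTRL_ENABLE  0b10000000
#define CTRL_MODE_2  0b00001000
#define CTRL_DIV_4   0b00000010

unsigned char ctrl = CTRL_ENABLE + CTRL_MODE_2 + CTRL_DIV_4;  /* 0b10001010 */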
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 7040
  • Country: va
Re: Binary vs. Hex in source
« Reply #19 on: January 22, 2021, 05:26:39 pm »
Hex is easier to read and successfully write. Strings of 0s and 1s mean nothing.

"I want to set bits 1, 2 and 5", 0b00010011 or 0x13, for me it's the binary that's faster and clearer both to read and write.  But I've never been good at hex conversion in my head.

As I said, where appropriate it's appropriate :)

However, that example is using magic bits, and I think you would do better in that instance to go something like:

Code: [Select]
some_reg = BIT_1 | BIT_2 | BIT_5;
Not least because then you can go:

Code: [Select]
some_reg = BIT_1  //!< Invert display
          | BIT_2  //!< 0,0 is top left
          | BIT_5;  //!< Interrupt on refresh

But then you'd probably be better going:

Code: [Select]
some_reg = DISP_INVERT
          | DISP_ORIG_TOPLEFT
          | DISP_REFR_INT_EN;

Just as examples - your particular circumstance would dictate, of course, but it illustrates that binary isn't necessarily what you want despite it being what you're thinking.
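(For completeness, a sketch of the definitions the first snippet assumes; the BIT_n names are hypothetical:)

Code: [Select]
#define BIT(n)  (1u << (n))
#define BIT_1   BIT(1)
#define BIT_2   BIT(2)
#define BIT_5   BIT(5)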
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14911
  • Country: fr
Re: Binary vs. Hex in source
« Reply #20 on: January 22, 2021, 05:49:39 pm »
Hex is easier to read and successfully write. Strings of 0s and 1s mean nothing.

"I want to set bits 1, 2 and 5", 0b00010011 or 0x13, for me it's the binary that's faster and clearer both to read and write.  But I've never been good at hex conversion in my head.

As I said, where appropriate it's appropriate :)

However, that example is using magic bits, and I think you would do better in that instance to go something like:

Code: [Select]
some_reg = BIT_1 | BIT_2 | BIT_5;
Not least because then you can go:

Code: [Select]
some_reg = BIT_1  //!< Invert display
          | BIT_2  //!< 0,0 is top left
          | BIT_5;  //!< Interrupt on refresh

But then you'd probably be better going:

Code: [Select]
some_reg = DISP_INVERT
          | DISP_ORIG_TOPLEFT
          | DISP_REFR_INT_EN;

Just as examples - your particular circumstance would dictate, of course, but it illustrates that binary isn't necessarily what you want despite it being what you're thinking.

Fully agree. This makes the overall debate about numeric constants pointless, as they are, as you pointed out, the worst possible way of expressing set bits.

Writing constants the way you show above is the right way of approaching it IMO. Some languages have better support for this than others, but through simple macros, as you showed, this is still very easy to do in C.

Sure, it takes more "typing" than a single numeric constant. But, as we can see over and over again in discussions about programming languages, that is a moot point IMHO. I'll take readability and maintainability over saving a few keystrokes (while taking much more time to figure out the required constant in binary or hex...). You don't even save time by trying to save keystrokes here; it's usually the contrary. Not to mention, of course, readability when you have to figure out later on what these bits actually mean.

Although I don't mind using macros in C, some languages do have better support for this. Two that come to mind are Oberon (and probably a couple of other Pascal derivatives), and Ada.

In Oberon, there is a SET type that allows you to express "bit maps" easily. For instance, the above example with bits 1, 2 and 5 set would be: { 1..2, 5 }. Neat.

In Ada, you can "map" records to bit fields, but, contrary to C, with a guaranteed mapping and no ambiguity or implementation-dependent behavior.

That said, if you don't care about portability and know how bit fields are implemented on your particular platform/compiler, using bit fields in C is also an option.
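A minimal sketch of that last option (register layout and names made up, assuming a compiler that packs bit-fields LSB-first; note that uint8_t bit-fields are themselves a common extension):
Code: [Select]
#include <stdint.h>

/* Bit-field layout is implementation-defined: only use this if you
   know how your particular compiler/target packs them. */
typedef union {
    uint8_t raw;                  /* whole-register access */
    struct {
        uint8_t src     : 2;      /* bits 1:0 */
        uint8_t out2_en : 1;      /* bit 2 */
        uint8_t out1_en : 1;      /* bit 3 */
        uint8_t         : 3;      /* bits 6:4 unused */
        uint8_t enable  : 1;      /* bit 7 */
    } bits;
} ctrl_reg_t;

/* usage: ctrl_reg_t r = { .raw = 0 }; r.bits.enable = 1; */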
 
The following users thanked this post: PlainName

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8319
  • Country: fi
Re: Binary vs. Hex in source
« Reply #21 on: January 22, 2021, 06:37:57 pm »
I think the #1 reason is the original lack of binary literals in the C standard.

Universal compiler support for 0b literals is relatively new.

Many C programmers have been writing C since the 1990s, if not earlier! With no binary literals available and a lot of low-level bit manipulation to do, they/we have learned to convert between hex and binary quickly and intuitively in our heads. Meaning when you see 'A', you immediately think 1010. And conversely, if you think 1010, you can just write down 'A' without having to think about it.

Optional delimiters would help with readability.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4238
  • Country: nz
Re: Binary vs. Hex in source
« Reply #22 on: January 22, 2021, 08:15:08 pm »
In some cases it makes sense to represent constants in octal.
If you aren't coding for a processor with 3*N bits per word and your code uses octal constants, *KILL* *IT* *WITH* *FIRE*!!!

Octal makes sense for the encoding of instructions on CPUs with 8 registers and 8 addressing modes, such as the PDP-11, M68000, and x86 (the MOD-REG-R/M byte, for example).

Octal was actually traditional on the PDP-11. I seem to recall the octal digits didn't line up with the 3-bit fields in 68000 instructions, and Motorola used hex anyway. But octal would have made sense for x86 (and I think the 8080/Z80 too).
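A concrete sketch of the x86 case: the ModRM byte splits 2-3-3, so an octal literal reads off the fields directly ("89 C3" is the standard encoding of mov ebx, eax):
Code: [Select]
/* ModRM 0xC3 == 0303 octal: mod 3 (register direct), reg 0 (EAX),
   r/m 3 (EBX).  The leading octal digit covers just the 2 mod bits. */
unsigned char modrm = 0303;           /* octal literal */
unsigned mod = (modrm >> 6) & 0x3;    /* 3 */
unsigned reg = (modrm >> 3) & 0x7;    /* 0 = EAX */
unsigned rm  =  modrm       & 0x7;    /* 3 = EBX */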
« Last Edit: January 22, 2021, 11:56:11 pm by brucehoult »
 

Offline HwAoRrDk

  • Super Contributor
  • ***
  • Posts: 1539
  • Country: gb
Re: Binary vs. Hex in source
« Reply #23 on: January 22, 2021, 11:09:26 pm »
Also, because the C standard committee rejected binary constants: "A proposal to add binary constants was rejected due to lack of precedent and insufficient utility." (line 30, page 51 (58 in PDF) [here]). So they are a non-standard extension (although widely implemented), the use of which may be prohibited by corporate and/or industry C coding standards.

They must have changed their minds, because binary constants are back on the cards for C2x. :)
 

Offline newbrain

  • Super Contributor
  • ***
  • Posts: 1742
  • Country: se
Re: Binary vs. Hex in source
« Reply #24 on: January 23, 2021, 01:28:29 pm »
In some cases it makes sense to represent constants in octal.
And, in fact, you are using octal every time you type the literal 0 in C  ;)
Nandemo wa shiranai wa yo, shitteru koto dake. (I don't know everything, just the things I know.)
 

