I was looking at a page about the scanf() function in C++. The table says that the format specifier %d reads a decimal integer, and the same is stated for %i.
So the question is: what is the difference between them?
UPD: I would also like to know the same about the format codes for floating-point numbers.
Answer 1, authority 100%
- %d — expects an integer in decimal notation as input.
- %i — expects an integer in decimal, octal (starts with 0), or hexadecimal (starts with 0x) notation as input (see the example below).
- %f, %e, %g — a real number; for scanf there is no difference between these conversion specifiers.
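To see the difference in practice, here is a minimal sketch (the input strings are made up for illustration) that feeds the same text to %d and %i, and reads the same real number with %lf, %le and %lg:

```c
#include <stdio.h>

int main(void) {
    int a = 0, b = 0;

    /* %d parses only decimal digits: it reads "0" and stops at 'x'. */
    sscanf("0x1A", "%d", &a);          /* a == 0  */
    /* %i recognizes the 0x prefix and parses the number as hexadecimal. */
    sscanf("0x1A", "%i", &b);          /* b == 26 */
    printf("%%d -> %d, %%i -> %d\n", a, b);

    /* For floating-point input, %f, %e and %g behave identically
       (with the l length modifier when reading into a double). */
    double x = 0.0, y = 0.0, z = 0.0;
    sscanf("3.14", "%lf", &x);
    sscanf("3.14", "%le", &y);
    sscanf("3.14", "%lg", &z);
    printf("%g %g %g\n", x, y, z);
    return 0;
}
```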
The scanf and printf functions are covered in almost every C/C++ reference. For example:
- Harbison, Steele. Programming Language C. Moscow: Binom-Press, 2004
- Lischner. C++ Reference
Answer 2, authority 98%
When used in the scanf family, %d is equivalent to calling strtol with base 10, while %i is equivalent to calling strtol with base 0 (which means the base is deduced from the input itself).
When used in the printf family, there is no difference between %d and %i.
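The same distinction can be reproduced directly with strtol; a small sketch, with input strings chosen for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Base 10, like %d: parsing stops at 'x', so the result is 0. */
    long d = strtol("0x1A", NULL, 10);
    /* Base 0, like %i: the base is deduced from the prefix, result is 26. */
    long i = strtol("0x1A", NULL, 0);
    /* With base 0, a leading zero selects octal: "010" is parsed as 8. */
    long o = strtol("010", NULL, 0);
    printf("%ld %ld %ld\n", d, i, o);
    return 0;
}
```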
From the C11 standard:
d, i — The int argument is converted to signed decimal in the style [−]dddd. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters.
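A short sketch of the quoted precision rules (and of %d and %i being interchangeable in printf):

```c
#include <stdio.h>

int main(void) {
    printf("[%.5d]\n", 42);        /* expanded with leading zeros: [00042] */
    printf("[%.0d]\n", 0);         /* zero with precision 0: no characters, [] */
    printf("[%d] [%i]\n", -7, -7); /* identical output: [-7] [-7] */
    return 0;
}
```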
More information on the various formats can be found here.