Here is a practical example of how very useful pointers can be. However, it relies on dynamic memory management, so I'm not sure how relevant it is to the discussion here.
Dealing with matrices in C is not particularly fun. The existing libraries range from cryptic (BLAS) to nicer (GSL), but even the nicer ones are not exactly developer-friendly. To demonstrate how easy matrix manipulation and management could be in C, I developed the following structures:
typedef double matrix_data;

struct matrix_owner {
    long         refcount;
    size_t       size;
    matrix_data  data[];  /* C99 flexible array member */
};

typedef struct {
    int                  rows;
    int                  cols;
    long                 rowstep;
    long                 colstep;
    matrix_data         *origin;
    struct matrix_owner *owner;
} matrix;
The idea is that a matrix describes the matrix, but a struct matrix_owner contains the matrix data. The origin field always points to some data element in the owner; it is the element at row 0, column 0 (upper left corner) in that particular matrix.
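To make the split concrete, here is a minimal sketch of a constructor under these definitions. The function name `new_matrix` and its error convention (returning 0 on success, -1 on failure) are my own choices for illustration, not part of any library:

```c
#include <stdlib.h>
#include <stddef.h>

typedef double matrix_data;

struct matrix_owner {
    long         refcount;
    size_t       size;
    matrix_data  data[];
};

typedef struct {
    int                  rows;
    int                  cols;
    long                 rowstep;
    long                 colstep;
    matrix_data         *origin;
    struct matrix_owner *owner;
} matrix;

/* Hypothetical constructor: allocate a rows-by-cols matrix, zero-filled,
   laid out in row-major order, with the owner's refcount starting at 1. */
static int new_matrix(matrix *m, const int rows, const int cols)
{
    struct matrix_owner *o;
    size_t n;

    if (!m || rows < 1 || cols < 1)
        return -1;

    n = (size_t)rows * (size_t)cols;
    o = calloc(1, sizeof *o + n * sizeof o->data[0]);
    if (!o)
        return -1;

    o->refcount = 1;
    o->size = n;

    m->rows = rows;
    m->cols = cols;
    m->rowstep = cols;   /* row-major: consecutive rows are cols elements apart */
    m->colstep = 1;      /* consecutive columns are adjacent in memory */
    m->origin = o->data; /* element (0,0) is the first data element */
    m->owner = o;
    return 0;
}
```

Note that `calloc(1, sizeof *o + n * sizeof o->data[0])` allocates the header and the flexible array member in a single block, which is exactly what the flexible array member is for.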
To access the element of matrix *m at row r, column c, you use m->origin[r*m->rowstep + c*m->colstep]:
static inline matrix_data get_matrix_element(matrix *m, const int row, const int col,
                                             const matrix_data outside)
{
    if (m && row >= 0 && col >= 0 && row < m->rows && col < m->cols)
        return m->origin[row * m->rowstep + col * m->colstep];
    else
        return outside;
}
static void set_matrix_element(matrix *m, const int row, const int col,
                               const matrix_data value)
{
    if (m && row >= 0 && col >= 0 && row < m->rows && col < m->cols)
        m->origin[row * m->rowstep + col * m->colstep] = value;
}
For example, if you want m2 to be a transposed view to m1 (so that any changes in one are reflected automatically in the other), you simply do
m2.rows = m1.cols;
m2.cols = m1.rows;
m2.rowstep = m1.colstep;
m2.colstep = m1.rowstep;
m2.origin = m1.origin;
m2.owner = m1.owner;
m2.owner->refcount++;
You can create row and column vector views to an existing matrix, diagonal vectors, and even block submatrix views. Any regular rectangular view to an existing matrix is possible. Similar to Unix hard links, all views to matrix data are treated equally. (Note that this is not the case with e.g. GSL, which has different types for matrices and views. Here, a matrix is a view to the data; there is no distinction between a matrix and a view to a matrix.)
Whenever you no longer use a matrix, you do need to discard it. However, the data it refers to is only discarded when no other matrix uses it:
static void discard_matrix(matrix *m)
{
    if (m) {
        if (m->owner) {
            if (--(m->owner->refcount) < 1) {
                /* No longer needed; discard. */
                m->owner->size = 0;
                free(m->owner);
            }
        }
        /* Poison matrix; helps detect use-after-discard bugs. */
        m->rows = 0;
        m->cols = 0;
        m->rowstep = 0;
        m->colstep = 0;
        m->origin = NULL;
        m->owner = NULL;
    }
}
If one adds two pointers to struct matrix_owner, data allocations can be pooled, so that a complicated sub-calculation can be done using a dedicated "pool". When the result matrix is obtained, it is copied (using a deep copy, i.e. copying the data from its struct matrix_owner to a new one), and the entire pool can be discarded at once, without bothering to discard the individual temporary matrices. (Such an "allocation pool" approach is heavily used by e.g. the Apache HTTPD server. Each request has its own pool, and when the request completes, the entire pool is discarded. This reduces the risk of memory leaks significantly, without making the code too tedious to maintain.)
The runtime "cost" of this type of structure does not differ much in practice from that of e.g. GSL's gsl_matrix. Yes, this one requires two multiplications per element access rather than just one, but signed integer multiplication is quite fast on current architectures, and most matrix operations seem to be bottlenecked by cache and RAM speed, not ALU performance. On my Core i5-7200U, the "extra" multiplication seems to vanish into memory latencies in practical code. In my opinion, the versatility and ease of use are well worth the slight overhead anyway.