DFU currently allocates buffer memory at the start of each data transfer
operation and frees it at the end. Especially since memalign() is used to
allocate the buffer, and various other allocations happen during the
transfer, this can expose the code to heap fragmentation, which can
prevent the allocation from succeeding on subsequent transfers.
Fix the code to allocate the buffer once when DFU mode is initialized,
and free the buffer once when DFU mode is exited, to reduce the exposure
to heap fragmentation.
The failure mode is:
// Internally to memalign(), this allocates a lot more than s to guarantee
// that alignment can occur, then returns chunks of memory at the start/
// end of the allocated buffer to the heap.
p = memalign(a, s);
// Various other malloc()s occur here, some of which allocate the RAM
// immediately before/after "p".
//
// DFU transfer is complete, so buffer is released.
free(p);
// By chance, no other malloc()/free() here, in DFU at least.
//
// A new DFU transfer starts, so the buffer is allocated again.
// In theory this should succeed since we just free()d a buffer of the
// same size. However, this fails because memalign() internally attempts
// to allocate much more than "s", yet free(p) above only free()d a
// little more than "s".
p = memalign(a, s);
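
With the fix, the pattern becomes the following (same notation as above;
this is a sketch of the intended lifetime, not literal code from the
patch):

// DFU mode is initialized; the buffer is allocated exactly once, while
// the heap is still in a predictable state.
p = memalign(a, s);
// Any number of transfers run here, all reusing "p". Other
// malloc()/free() activity during transfers can still fragment the
// heap, but that no longer matters for the buffer, since it is never
// reallocated.
// ...
// DFU mode is exited; the buffer is released exactly once.
free(p);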
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Lukasz Majewski <l.majewski@samsung.com>
Tested-by: Lukasz Majewski <l.majewski@samsung.com>
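
Note: the hunks below call dfu_get_buf() and dfu_free_buf(). As a rough
sketch of the shape those helpers are assumed to take (the CONFIG_*
names, the printf(), and the internals are illustrative assumptions, not
quoted from the tree):

static unsigned char *dfu_buf;

unsigned char *dfu_get_buf(struct dfu_entity *dfu)
{
	/* "dfu" is unused in this sketch; kept to match the call sites
	 * below. Idempotent: after the first call, reuse the buffer. */
	if (dfu_buf != NULL)
		return dfu_buf;

	/* Aligned allocation so the buffer is usable for cache/DMA ops;
	 * both CONFIG_* names here are assumptions for illustration. */
	dfu_buf = memalign(CONFIG_SYS_CACHELINE_SIZE,
			   CONFIG_SYS_DFU_DATA_BUF_SIZE);
	if (dfu_buf == NULL)
		printf("%s: Could not memalign 0x%x bytes\n",
		       __func__, CONFIG_SYS_DFU_DATA_BUF_SIZE);

	return dfu_buf;
}

void dfu_free_buf(void)
{
	/* free(NULL) is a no-op, so this is safe even if dfu_get_buf()
	 * was never called or failed. */
	free(dfu_buf);
	dfu_buf = NULL;
}

Making dfu_get_buf() idempotent keeps the added call site safe even if
initialization is reached more than once.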
 void dfu_write_transaction_cleanup(struct dfu_entity *dfu)
 {
 	/* clear everything */
-	dfu_free_buf();
 	dfu->crc = 0;
 	dfu->offset = 0;
 	dfu->i_blk_seq_num = 0;

 		      dfu_hash_algo->name, dfu->crc);
 		puts("\nUPLOAD ... done\nCtrl+C to exit ...\n");
-		dfu_free_buf();
 		dfu->i_blk_seq_num = 0;
 		dfu->crc = 0;
 		dfu->offset = 0;

 		       __func__, interface);
 		return -1;
 	}
+	dfu_get_buf(dfu);
 	return 0;
 }
 void dfu_free_entities(void)
 {
 	struct dfu_entity *dfu, *p, *t = NULL;
+	dfu_free_buf();
 	list_for_each_entry_safe_reverse(dfu, p, &dfu_list, list) {
 		list_del(&dfu->list);
 		if (dfu->free_entity)