When I started my career in drive control and automation, I tried hard to do the most with the least processor power and the least memory. I learned which data types used the least memory and which executed fastest. We all knew that no one could possibly use or need 64K of memory.
That policy of conserving processor resources saved a lot of cost in most cases (excluding the cost of the extra engineering). Several times I ran out of memory anyway. On one project I had fewer than 300 bytes free on a 32K system. Way too close for comfort.
Often I have wondered whether drive-system performance would improve with a faster scan time. At one point, the processor for a high-performance coater drive had a scan time of 45 msec. I got funding to upgrade to a processor with six times the performance. The improved scan time (45 msec down to 9 msec) did not improve the tension regulator's response, but it did not hurt, either.
Today, memory chips are inexpensive, but memory modules for control systems still cost thousands of dollars per megabyte. I am not advocating spending an extra $10K per system on memory. I do advocate purchasing enough processor and memory for the application, with at least 20 to 40% extra capacity in reserve.
I also recommend taking a bit of time to optimize programs to save processor time and memory. This can produce tighter code that is easier to read and understand. However, pushing a program to the ultimate savings in memory and processor time creates such complexity that even the original programmer cannot debug or modify it several years later.