If you just look at the FLOPS magnitude of computing, you pretty much get the benchmark for computer stats. I tend to refer to these stats when I do 3d rendering, when we build out a server rack for dialers, or when I want to buy a new computer and want to know what its stats are. Since NO ONE seems to publish their computing stats in FLOPS (shifting standards for measuring computer performance is a b*tch), there is Passmark.
Simply search for the processor name plus "Passmark" and you will get its Passmark rating. You "add up" Passmark or FLOPS for multi-core/multi-CPU computing. Servers basically fulfill the role of letting you stack as many CPUs in one computer as possible. You want to migrate to FLOPS since it's an open standard.
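A minimal sketch of that "add up" idea (my own convention as a game abstraction; real multi-socket scaling is never perfectly linear):

```python
# Game-abstraction sketch: treat a multi-CPU box as the sum of its
# per-CPU ratings (Passmark or FLOPS). Real-world scaling is not this clean.
def stacked_rating(per_cpu_rating, cpu_count):
    return per_cpu_rating * cpu_count

# e.g. a 2-socket server built from 8894-Passmark CPUs
print(stacked_rating(8894, 2))  # 17788
```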
Related to both computing and 3d is the graphics card processor, or GPU (Graphics Processing Unit), pretty much a CPU dedicated to graphics. It's relevant because Nvidia, like ARM, is now in the CPU industry (doing a Honda: producing ancillary and smaller versions of the main industry's product and growing to the point where their technology allows them to compete with the big boys).
So basically graphics cards allow one computer to have multi-core processing like a server, up to the number of GPU slots the computer's other parts can take (and boy, the wattage and heat generated by these bad boys).
Now that you have a basic overview of computing power, you can see how simple it is to make a computer metric system for games.
Just get the FLOPS and use the exponent as the rating. If you look at the magnitude-of-computing table you will notice we are in the giga-FLOPS range, 1 to 900 x10^9. Checking Passmark: the Intel Core i7-980X scores around 8894 Passmark but about 147.6 GFLOPS, or roughly 60 Passmark per GFLOPS; a Pentium III at 311 Passmark does about 1.4 GFLOPS, or roughly 230 Passmark per GFLOPS, so this is not a very precise system. Note that you can "overclock" a computer, but I don't know by how much it improves performance. Also note that FLOPS and Passmark converted with simple math won't cut it in the real world, but it's a great benchmark for non-computer scientists.
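As a rough converter (a sketch using the ~60 Passmark per GFLOPS ratio eyeballed from the i7-980X numbers above; it will be wildly off for other architectures, as the Pentium III ratio shows):

```python
# Ballpark conversion only: the 60 Passmark-per-GFLOPS ratio comes from the
# i7-980X figures above and does not hold across architectures.
PASSMARK_PER_GFLOPS = 60.0

def passmark_to_gflops(passmark):
    return passmark / PASSMARK_PER_GFLOPS

print(passmark_to_gflops(8894))  # ~148 GFLOPS, the i7-980X
print(passmark_to_gflops(1200))  # ~20 GFLOPS, a typical 2008 desktop
```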
In classic Traveller you can divide the FLOPS performance ranges into TLs. Servers and a few computers can "stack" their CPUs; typically there is a limit on how many can be stacked.
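One way to turn that into a rating (purely my own house-rule sketch, not canon Traveller; the TL offset is made up):

```python
import math

# House-rule sketch: a computer's Rating is the order of magnitude (exponent)
# of its FLOPS figure, and TL is that rating plus a made-up offset.
TL_OFFSET = 1  # hypothetical: puts a giga-FLOPS desktop around TL 12

def flops_rating(flops):
    return int(math.log10(flops))

def tech_level(flops):
    return flops_rating(flops) + TL_OFFSET

print(flops_rating(147.6e9))  # 11 (giga-FLOPS range)
print(tech_level(147.6e9))    # 12 under this assumed offset
```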
The GM can set a "Standard Computer" (a desktop board) weight, cost, and FLOP rating and give it extra processing slots. He can also set "Server" stats (pretty much a heavier computer designed to fit as a slim pizza box into a "cabinet" server rack). There is also the "blade" server: smaller, more compact computers that fit as VCR-cassette-sized (but much heavier) blades into a specialized server rack.
Pricing seems to double at each step. A Standard Computer is about $1000 with 1 CPU, with some variance in performance and cost (only counting GPUs, which still need to be modded to do things other than graphics). A pizza-box server is about $2000, with 2 higher-end CPUs that are each double the performance of a desktop's (effectively x4). A blade server is about $4000 but only comes with 2 CPUs using up one slot out of 16 (slot counts typically come in divisors of 8 or 4); each additional "blade" costs about $2000 (you are paying for the size reduction).
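The same doubling ladder as a quick table (prices and multipliers are the rough figures from the paragraph above; the blade-cost helper is just my framing):

```python
# Rough price/performance ladder from the post. Performance is relative to
# the $1000 desktop baseline.
tiers = [
    {"name": "Standard Computer",      "price": 1000, "cpus": 1, "perf_vs_desktop": 1},
    {"name": "Pizza-box server",       "price": 2000, "cpus": 2, "perf_vs_desktop": 4},
    {"name": "Blade server (1 blade)", "price": 4000, "cpus": 2, "perf_vs_desktop": 4},
]

EXTRA_BLADE_PRICE = 2000  # each additional blade beyond the first

def blade_server_cost(blades):
    """Cost of a 16-slot blade server populated with `blades` blades."""
    blades = min(blades, 16)
    return 4000 + max(0, blades - 1) * EXTRA_BLADE_PRICE

for t in tiers:
    print(t["name"], "$" + str(t["price"]), str(t["perf_vs_desktop"]) + "x desktop")
print(blade_server_cost(8))  # 4000 + 7 * 2000 = 18000
```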
Note that there are Open Compute versions of all of these, which I think a sci-fi setting would be moving towards because of the "NSA" backdoor problem. Democratic anarchy vs. representative plutocracy?
Programs consume computing resources (see your computer's task manager or system monitor). It's an accounting thing that can be simplified by the same threshold-based approach used in an "Abstract Wealth" or "Abstract Damage" system: there are certain thresholds where every additional burden penalizes the computer's performance until nothing can run (sounds familiar?), and running a program has diminishing returns (allocating a server to run LibreOffice doesn't make the experience that much better, but you can open a ton of documents!).
Just take a number like 4 and say this is the benchmark and standard for task X (where X may be an amazing feat like doing the ship's accounting in under a minute, plus a market analysis of all future options given the data, in the time it takes to upload the data). Without the computer, the task is at -4. Now, for every additional program of a given size you load, that 4 is reduced by 1: every additional program reduces performance by 1 and the benchmark feat degrades.
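A minimal sketch of that threshold mechanic (the 4, the -4, and the -1 per extra program are from the paragraph above; I'm reading "every additional program" as every program after the first, and the class itself is just my framing):

```python
from dataclasses import dataclass, field

BENCHMARK_RATING = 4       # the standard feat (ship's accounting in under a minute, etc.)
NO_COMPUTER_PENALTY = -4   # attempting the feat with no computer at all

@dataclass
class ShipComputer:
    rating: int = BENCHMARK_RATING
    programs: list = field(default_factory=list)

    def load(self, program):
        self.programs.append(program)

    def effective_rating(self):
        # each program after the first drags performance down by 1
        penalty = max(0, len(self.programs) - 1)
        return self.rating - penalty

comp = ShipComputer()
for p in ["Accounting Suite", "Market Analysis", "Library Data"]:
    comp.load(p)
print(comp.effective_rating())  # 4 - 2 = 2; once this hits bottom, nothing runs
```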
It's mostly about setting benchmarks in RPG system mechanics. Clearly defined ones that are well thought out are awesome.
++++++++++++
Best CPU-to-cost ratio. A simple way to pick a good CPU is to see how many Passmarks (you can convert to FLOPS later) you get per $cost$. Note what kind of CPU it is by clicking on it: it might be a great value but score only 850 Passmarks (where a 2008 computer averages 1100-1500 Passmarks; computers like my newly purchased notebook are at 3000, five years later). You can do the same with graphics cards.
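A quick sketch of that value comparison (the CPU names, prices, and scores below are placeholders; plug in current numbers from the Passmark charts):

```python
# Placeholder data: swap in real Passmark scores and street prices.
cpus = [
    {"name": "Budget CPU (hypothetical)",    "passmark": 850,  "price_usd": 45},
    {"name": "Mid-range CPU (hypothetical)", "passmark": 3000, "price_usd": 200},
]

# Rank by Passmark per dollar, best value first
for cpu in sorted(cpus, key=lambda c: c["passmark"] / c["price_usd"], reverse=True):
    value = cpu["passmark"] / cpu["price_usd"]
    print(f'{cpu["name"]}: {value:.1f} Passmark per dollar')
```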
So ideally, for 3d rendering, I'd get a $1.2k/Php60,000 gaming computer with a large motherboard and use the GPU benchmarks to pick 2-4 GeForce GTX 560 Ti cards at $160-180 each (probably importing them from HK, because it would be a b*tch to import electronic hardware into the Philippines without gross price barriers), giving about 4500 + (3500 x 3) Passmarks under the hood for rendering. This would let me use the "rendered" viewport (a very low-quality render view to check that all the lighting and textures are in the right places) while I work. Ouch, that's Php100k; where am I gonna get that kind of money lolz. To dream...
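The render-box math above, spelled out (the 4500/3500 Passmark split and the $160-180 card price are from the paragraph; the dollar total is before shipping and import mark-ups):

```python
# Totals for the hypothetical render box described above.
base_machine_usd   = 1200   # the gaming desktop (~Php60,000)
gtx_560_ti_usd     = 170    # midpoint of the $160-180 range
extra_cards        = 3
first_gpu_passmark = 4500   # as quoted in the post
extra_gpu_passmark = 3500   # per additional card, as quoted

total_passmark = first_gpu_passmark + extra_cards * extra_gpu_passmark
total_usd      = base_machine_usd + extra_cards * gtx_560_ti_usd
print(total_passmark)  # 15000
print(total_usd)       # 1710 USD before import barriers -> roughly Php100k all-in
```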
I wonder if there is an app that lets me measure my computer (phone and notebook) in FLOPS? Oh well.
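Short of a dedicated app, you can get a ballpark yourself; a minimal sketch using NumPy (it only measures the matrix-multiply path, so it's a rough estimate, not a proper LINPACK score):

```python
import time
import numpy as np

def estimate_gflops(n=1024):
    """Time an n x n matrix multiply and count its ~2*n^3 floating point ops."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - start
    return (2 * n ** 3) / elapsed / 1e9

print(f"~{estimate_gflops():.1f} GFLOPS (matmul only, ballpark)")
```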
Sigh, oh, open computing. Open and transparent standards would be something of a sci-fi discussion, particularly regarding the Three Laws of Robotics: would they be part of an AI's hard wiring, and what about tampering with the 3LR in a computer?
Added notes and update:
Encryption is Easier to Crack (from MIT).
Overclocking. There are so many variables in architecture that it's up to the game designer to dictate assumptions about processors and how far they can be pushed beyond their safe operating margins (see system bottlenecks in the article).