
Apple M1 gives a glimpse of pro credentials as it shines in enterprise benchmark


More impressive performance metrics are emerging regarding the new Apple M1 chip, suggesting that the latest MacBook Air and MacBook Pro devices, both of which are based on the new processor, will make impressive business laptops.

Greg Smith of Crunchy Data wanted to test how the new M1 would perform when running the open-source database management system PostgreSQL, and just as in other benchmarks, the M1 exceeded expectations. Compared with previous MacBook Pro models, the 2020 M1 machine came out clearly on top, delivering roughly 32,000 transactions per second with a single client and 92,000 across all cores.
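For context, pgbench, the benchmarking tool that ships with PostgreSQL, is the standard way to produce transactions-per-second figures like these. A minimal sketch of such a run follows; the database name, scale factor, and read-only workload are assumptions for illustration, since the article does not give the exact flags Smith used:

```shell
# Initialize a test database at scale factor 100 (assumed; ~1.5 GB of data).
createdb bench
pgbench -i -s 100 bench

# Read-only (-S) run with a single client for 60 seconds
# (compare with the single-client figure above):
pgbench -S -c 1 -T 60 bench

# Read-only run with multiple clients and worker threads to load all cores
# (compare with the all-core figure above):
pgbench -S -c 8 -j 8 -T 60 bench
```

pgbench prints its result as a `tps = …` line at the end of each run; these commands require a running PostgreSQL server.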

Smith also added two generations of AMD's Ryzen desktop hardware into his comparison, as well as some Intel mini-PCs. While the M1 was never going to outperform high-end desktop processors, it more than held its own, particularly when client numbers were towards the lower end.

Outperforming expectations

“If Apple can push the M1 design into larger amounts of memory and add a few more cores, it could be a fierce midsize server competitor,” Smith said. “That's not going to disrupt the big industry push toward hosting things on giant cloud systems, where data centers want >=48 processors for a server to be worth installing. There are cloud-scale ARM servers out there, and Apple's ARM instruction set Macs make developing for that platform easier. I'm looking forward to the competition of a four-way race between Intel, AMD, Apple, and the other ARM designers.”

There will be teething problems along the way for Apple, particularly as the business community gets used to its new processor. Compatibility issues are to be expected, and it will be interesting to see how, and how quickly, Apple responds to them.

M1 Macs may not be suitable for all businesses, but once a few of the kinks have been ironed out, developers will gain access to a seriously impressive piece of kit.

Via Crunchy Data

Date

24 Nov 2020


Other Blog Posts

  • One of the world's largest supercomputers lived for only 10 minutes

    There was a time when supercomputers were only available to a handful of organizations, mostly governments, public research facilities and scientific bodies. The rise of cloud computing and the widespread availability of sophisticated cloud workload management (CWM) tools have lowered the barrier to entry considerably.

    Only last week, YellowDog, a CWM outfit based in Bristol, United Kingdom, assembled a virtual supercomputer using its proprietary platform. At its peak, which lasted about 10 minutes, it had mustered an army of more than 3.2 million vCPUs.

    While it was nowhere near as powerful as Fugaku, that was enough to propel it into the top 10 of the world's fastest supercomputers, at least for a few minutes.

    Short-lived

    The provisioning, done on behalf of a pharmaceutical company, was used to run a popular drug discovery application as a single cluster. A back-of-the-envelope calculation puts the raw cost of the project at about $65,000.

    That figure accounts for 33,333 AWS c5.24xlarge instances (96 vCPUs each), one of the instance types used during the run (broadly comparable to bare-metal or dedicated servers), at $1.6013 per hour. That works out to $53,376 per hour, or $57,824 for the entire 65-minute session.

    "With access to this on-demand supercomputer, the researchers were able to analyze and screen 337 million compounds in 7 hours. To replicate that using their on-premises systems would have taken two months," said Colin Bridger from AWS.

    Cloud-agnostic

    What's extraordinary is that this sort of firepower is available to anybody who can afford it. And it is based on the sort of hardware that runs our cloud computing world: web hosting, website builders, cloud storage and email services, among others.

    CWM platforms have evolved over the years to develop algorithms and machine learning capabilities to choose the best source of compute, regardless of its origin or type.

    For example, one cloud provider may offer the lowest-cost spot compute, but it would not be chosen if it were unavailable in the territory set by the customer, or if the provider could not actually supply enough servers of the required instance type. In either case, another source of compute would be selected.

  • Microsoft Teams update could soon give you a whole new view on your calls

    Get a completely different view on your Microsoft Teams calls.

  • More clues appear to link Supernova web shell activity to Chinese hackers

    Secureworks researchers have observed two cases in which the Chinese threat group Spiral used compromised servers to deploy a malicious web shell.

  • AWS is making it simple to use the same dataset across multiple applications

    The new capability makes it easier to process data between multiple applications.

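The cost estimate in the supercomputer item above is easy to check. A quick sketch of the arithmetic, using the instance count, hourly rate and session length given there:

```python
# Back-of-the-envelope check of the AWS cost figures quoted above.
instances = 33_333          # c5.24xlarge instances used in the run
vcpus = instances * 96      # 96 vCPUs per instance -> ~3.2 million vCPUs
rate = 1.6013               # quoted price per instance-hour (USD)

per_hour = instances * rate
total = per_hour * 65 / 60  # the session lasted 65 minutes in all

print(vcpus)                # 3199968 (just under 3.2 million vCPUs)
print(round(per_hour))      # 53376  -> ~$53,376 per hour
print(round(total))         # 57824  -> ~$57,824 for the whole session
```

The computed totals match the figures quoted in the article, with the gap up to the ~$65,000 estimate presumably covering overheads beyond raw instance-hours.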
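The compute-source selection described in the supercomputer item above boils down to filtering offers by the customer's constraints and then taking the cheapest survivor. The structure and names below are hypothetical, purely to illustrate that logic, not YellowDog's actual implementation:

```python
# Hypothetical sketch of how a CWM scheduler might pick a compute source:
# keep only offers in the allowed territories with enough capacity,
# then choose the cheapest remaining one.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    region: str
    price_per_hour: float   # spot price per instance (USD)
    available: int          # instances the provider can actually supply

def choose_source(offers, allowed_regions, needed):
    candidates = [o for o in offers
                  if o.region in allowed_regions and o.available >= needed]
    if not candidates:
        return None         # no single source satisfies the request
    return min(candidates, key=lambda o: o.price_per_hour)

offers = [
    Offer("cloud-a", "us-east", 1.20, 50_000),  # cheapest, wrong territory
    Offer("cloud-b", "eu-west", 1.45, 40_000),
    Offer("cloud-c", "eu-west", 1.60, 10_000),  # too little capacity
]
best = choose_source(offers, allowed_regions={"eu-west"}, needed=33_333)
print(best.provider)  # cloud-b: cheapest offer meeting region and capacity
```

A real scheduler would also fall back to combining multiple sources when no single provider has enough capacity; this sketch only covers the single-source case described above.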
