cpu

Published By: Intel     Published Date: Sep 27, 2019
As the first major memory and storage breakthrough in 25 years, Intel Optane technology combines industry-leading low latency, high endurance, QoS, and high throughput, enabling solutions that remove data bottlenecks and unleash CPU utilization. With Intel Optane technology, data centers can deploy bigger and more affordable datasets and gain new insights from large memory pools. Here are just ten ways Intel Optane technology can make a difference to your business. To find out more, download this whitepaper today.
Tags : 
    
Intel
Published By: Pure Storage     Published Date: Jan 12, 2018
Data is growing at an amazing rate and will continue this rapid growth. New techniques in data processing and analytics, including AI, machine learning, and deep learning, allow specially designed applications not only to analyze data but to learn from the analysis and make predictions. Processing the data requires computer systems built on multi-core CPUs or GPUs for parallel processing, connected by extremely fast networks. However, legacy storage solutions are based on architectures that are decades old, unscalable, and poorly suited to the massive concurrency required by machine learning. Legacy storage is becoming a bottleneck in processing big data, and a new storage technology is needed to meet data analytics performance needs.
Tags : 
reporting, artificial intelligence, insights, organization, institution, recognition
    
Pure Storage
Published By: IBM APAC     Published Date: Apr 27, 2018
While relying on x86 servers and Oracle databases to support its stock trading systems, Wanlian Securities found that processing a rapidly increasing number of transactions quickly became a huge challenge. The firm shifted to IBM FlashSystem, which helped cut the average response time for its Oracle databases from 10 milliseconds to less than 0.4 milliseconds and improved CPU usage by 15%. Download this case study now.
Tags : 
    
IBM APAC
Published By: VMTurbo     Published Date: Mar 25, 2015
An Intelligent Roadmap for Capacity Planning
Many organizations apply overly simplistic principles to determine requirements for compute capacity in their virtualized data centers. These principles are based on a resource allocation model that takes the total amount of memory and CPU allocated to all virtual machines in a compute cluster and assumes a defined level of overprovisioning (e.g., 2:1, 4:1, 8:1, 12:1) in order to calculate the requirement for physical resources. Often managed in spreadsheets or simple databases, and augmented by simple alert-based monitoring tools, the resource allocation model does not account for the actual resource consumption driven by each application workload in the operational environment, and it inherently erodes the efficiency that can be driven from the underlying infrastructure.
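To make the contrast concrete, here is a minimal sketch, with hypothetical numbers, of the allocation-based sizing described above next to a consumption-aware alternative. The function names, the 4:1 ratio, and the workload figures are illustrative assumptions, not VMTurbo's actual model.

```python
import math

def hosts_needed_allocation(total_vcpus, overprovision_ratio, cores_per_host):
    # Allocation model: divide allocated vCPUs by an assumed
    # overprovisioning ratio (e.g. 4:1) to get "required" physical cores.
    required_cores = total_vcpus / overprovision_ratio
    return math.ceil(required_cores / cores_per_host)

def hosts_needed_consumption(peak_core_demand, headroom, cores_per_host):
    # Consumption model: size to the peak cores actually consumed by
    # the workloads, plus an explicit headroom fraction.
    required_cores = peak_core_demand * (1 + headroom)
    return math.ceil(required_cores / cores_per_host)

# Hypothetical cluster: 800 allocated vCPUs on 32-core hosts.
print(hosts_needed_allocation(800, 4, 32))      # 7 hosts, regardless of real load
print(hosts_needed_consumption(120, 0.25, 32))  # 5 hosts for a 120-core peak
```

The allocation model sizes the cluster the same way whether those 800 vCPUs sit idle or run hot, which is exactly the inefficiency the paper argues against.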
Tags : 
capacity planning, vmturbo, resource allocation model, cpu, cloud era
    
VMTurbo
Published By: NEC     Published Date: Aug 26, 2014
In addition to high reliability and availability, enterprise mission-critical applications, data centers operating 24x7, and data analysis platforms all demand powerful data processing capabilities and stability. The NEC PCIe SSD Appliance for Microsoft® SQL Server® is a best-practice reference architecture for such demanding workloads. It comprises an Express5800 Scalable Enterprise Server Series with Intel® Xeon® processor E7 v2 family CPUs, high-performance HGST FlashMAX II PCIe server-mounted flash storage, and Microsoft® SQL Server® 2014. When compared with the previous reference architecture, based on a server with Intel® Xeon® processor E7 family CPUs, benchmark testing demonstrated a performance improvement of up to 173% in logical scan rate in a data warehouse environment. The testing also demonstrated consistently fast and stable performance in online transaction processing (OLTP), even under the peak loads that could potentially be encountered.
Tags : 
sql, datacenter, servers, virtualization, customer value, analytics, application owners, system integrators, big data, reliability, enterprise, availability, serviceability, processor
    
NEC
Published By: NeoSpire Managed Hosting     Published Date: Sep 01, 2009
When a company creates a custom web application, it also creates a custom challenge. Traditional monitoring does not cover everything, but alerts when something goes wrong with the application are more important than ever. Learn the benefits of custom application monitoring and how NeoSpire can help:
- Dramatically reduces the time to troubleshoot a failed web application
- Provides a complete overview of web transaction and web application performance, to proactively locate and fix bottlenecks in your web system
- Improves customer experience and increases customer satisfaction with excellent performance of your web applications and e-business transactions
- Eliminates the risk of loss of revenue and credibility
- Increases the efficiency of mission-critical e-business operations and web applications
Tags : 
application monitoring, load balancing system, neospire, benefits of testing, hardware monitoring, disk utilization, memory utilization, cpu load, service monitoring, port check (standard and custom ports), number of processes running, specific process monitoring, database monitoring, mysql replication monitoring, oracle rman backup monitoring, vpn monitoring: farend ping test, load-balanced ip monitoring, external url monitoring, centralized ids, logging monitoring
    
NeoSpire Managed Hosting
Published By: Dell Software     Published Date: Aug 15, 2013
Virtualization has changed data centers from static to fluid, and this fluidity has brought challenges to resource allocation. Find out how your organization can stay ahead of the curve. Read the White Paper >>
Tags : 
dell, cpu, memory, storage, vsphere, data center, technology, dynamic
    
Dell Software
Published By: Dell     Published Date: Aug 16, 2013
"Virtualization has changed the data center dynamic from static to fluid. While workloads used to be confined to the hardware on which they were installed, workloads today flow from host to host based on administrator-defined rules, as well as in reaction to changes in the host environment. The fluidic nature of the new data center has brought challenges to resource allocation; find out how your organization can stay ahead of the curve. Read the White Paper"
Tags : 
best practices, cpu, memory, storage, vsphere, virtual
    
Dell
Published By: Intel     Published Date: Nov 13, 2019
The combination of the SAP HANA platform and Intel® Optane™ DC persistent memory means that enterprises now have new ways to approach data tiering and management. Downtime can be significantly reduced because SAP HANA doesn’t have to reload data back into main memory after a server reboot, and more data can remain in the “hot” tier of main memory, which increases the amount of data that can be analyzed at a time. See this new infographic for a quick summary. Intel® Optane™ DC persistent memory also benefits SAP HANA by providing a higher memory density than DRAM, in addition to being byte-addressable. Up to three terabytes of Intel® Optane™ DC persistent memory is available per CPU socket on systems configured with future Intel® Xeon® Gold processors and future Intel® Xeon® Platinum processors, which means that an eight-socket server can contain up to 24 terabytes of Intel® Optane™ DC persistent memory.
Tags : 
    
Intel
Published By: Oracle Corp.     Published Date: May 06, 2013
Building Large-Scale eCommerce Platforms With Oracle
Tags : 
oracle, ecommerce
    
Oracle Corp.
Published By: Vertica     Published Date: Jan 19, 2010
Pink OTC Market Inc. is the third largest U.S. equity trading marketplace. Learn how Pink OTC built a highly available and highly reliable (no downtime in a year of production use) data warehouse using Vertica's Analytic DBMS that cost-effectively stores billions of records and scales easily by simply adding CPUs without incurring additional licensing fees.
Tags : 
pink, adr, data warehousing, vertica, dbms, database, analytical applications
    
Vertica
Published By: ProfitBricks     Published Date: Jan 01, 2013
Performance testing and benchmarking of cloud computing platforms is a complex task, compounded by the differences between providers and the use cases of cloud computing users. IaaS services are utilized by a large variety of industries, and cloud performance cannot be understood by representing it with a single value. When selecting a cloud computing provider, IT professionals consider many factors: compatibility, performance, cost, security, and more. Performance is a key factor that drives many others, including cost. In many cases, three primary bottlenecks affect server performance: central processing unit (CPU) performance, disk performance, and internal network performance.
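As a rough illustration of why no single number suffices, the sketch below (a hypothetical micro-benchmark, not ProfitBricks' methodology) times two of those bottlenecks, CPU and disk, separately; network throughput would need its own test against a remote endpoint.

```python
import os
import tempfile
import time

def time_cpu(iterations=2_000_000):
    # Crude CPU test: integer arithmetic in a tight loop.
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def time_disk(size_mb=64):
    # Crude disk test: sequential 1 MB writes flushed to stable storage.
    block = os.urandom(1024 * 1024)
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

# Two providers' VMs can rank in opposite orders on these two numbers,
# which is why a single composite score hides the bottleneck that matters.
print(f"CPU loop:   {time_cpu():.2f} s")
print(f"Disk write: {time_disk():.2f} s")
```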
Tags : 
cloud computing, iaas, profitbricks, amazon ec2, rackspace cloud, central processing unit, disk performance, internal network performance
    
ProfitBricks
Published By: DigitalOcean, LLC     Published Date: Nov 04, 2019
DigitalOcean commissioned Cloud Spectator to evaluate the performance of virtual machines (VMs) from three different Cloud Service Providers (CSPs): Amazon Web Services, Google Compute Engine and DigitalOcean. Cloud Spectator tested the various VMs to evaluate the CPU performance, Random Access Memory (RAM) and storage read and write performance of each provider’s VMs. The purpose of the study was to understand Cloud service performance among major Cloud providers with similarly-sized VMs using a standardized, repeatable testing methodology. Based on the analysis, DigitalOcean’s VM performance was superior in nearly all measured VM performance dimensions, and DigitalOcean provides some of the most compelling performance per dollar available in the industry.
Tags : 
    
DigitalOcean, LLC
Published By: Intel     Published Date: Nov 14, 2019
You can migrate live VMs between Intel processor-based servers, but migration in a mixed-CPU environment requires downtime and administrative hassle.
A study commissioned by Intel Corp.
One of the greatest advantages of adopting a public, private, or hybrid cloud environment is being able to easily migrate the virtual machines that run your critical business applications: within the data center, across data centers, and between clouds. Routine hardware maintenance, data center expansion, server hardware upgrades, VM consolidation, and other events all require your IT staff to migrate VMs. For years, one powerful tool in your arsenal has been VMware vSphere® vMotion®, which can live migrate VMs from one host to another with zero downtime, provided the servers share the same underlying architecture. The EVC (Enhanced vMotion Compatibility) feature of vMotion makes it possible to live migrate virtual machines even between different generations of CPUs within a given architecture.
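The sketch below models the idea behind an EVC-style baseline in plain Python. The generation names and feature sets are invented for illustration; these are not vSphere APIs or real EVC baseline identifiers.

```python
# Hypothetical feature sets per CPU generation (illustrative only).
CPU_FEATURES = {
    "gen1": {"sse4.2", "aes"},
    "gen2": {"sse4.2", "aes", "avx"},
    "gen3": {"sse4.2", "aes", "avx", "avx2"},
}

def cluster_baseline(host_generations):
    # An EVC-style baseline is the intersection of every host's
    # feature set: newer features are masked from guests.
    return set.intersection(*(CPU_FEATURES[g] for g in host_generations))

def can_live_migrate(vm_features, destination_gen):
    # Safe only if the destination exposes every feature the running
    # VM may already have used.
    return vm_features <= CPU_FEATURES[destination_gen]

baseline = cluster_baseline(["gen1", "gen3"])          # {"sse4.2", "aes"}
print(can_live_migrate(baseline, "gen2"))              # True: features masked
print(can_live_migrate(CPU_FEATURES["gen3"], "gen1"))  # False: needs downtime
```

Masking to the common baseline is the trade-off EVC makes: VMs lose access to the newest instructions, but they gain zero-downtime mobility across CPU generations.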
Tags : 
    
Intel
Published By: Dell EMC     Published Date: Nov 09, 2015
Download this white paper and learn how the Dell Hybrid HPC solution delivers a hybrid CPU and GPU compute environment with the PowerEdge C6320 and C4130 to: - Optimize workloads across CPU/GPU servers - Deliver the highest-density, highest-performance in a small footprint - Provide significant power, cooling and resource utilization benefits - Lower cost of ownership and enhance reliability through integrated Dell Remote Access Controller (iDRAC) and Lifecycle Controller
Tags : 
    
Dell EMC
Published By: Amazon Web Services     Published Date: Apr 16, 2018
Since SAP introduced its in-memory database, SAP HANA, customers have significantly accelerated everything from their core business operations to big data analytics. But capitalizing on SAP HANA’s full potential requires computational power and memory capacity beyond the capabilities of many existing data center platforms. To ensure that deployments in the AWS Cloud could meet the most stringent SAP HANA demands, AWS collaborated with SAP and Intel to deliver the Amazon EC2 X1 and X1e instances, part of the Amazon EC2 Memory-Optimized instance family. With four Intel® Xeon® E7 8880 v3 processors (which can power 128 virtual CPUs), X1 offers more memory than any other SAP-certified cloud native instance available today.
Tags : 
    
Amazon Web Services
Published By: Red Hat     Published Date: Jan 01, 2013
There’s a reason Red Hat Enterprise Linux is consistently chosen as the operating platform for industry-standard performance benchmarks. Red Hat Enterprise Linux 6 is designed to deliver scale-out and scale-up performance for small and large servers, offering documented scalability up to 4,096 CPUs and 64 Terabytes of RAM.
Tags : 
database, red hat enterprise, small servers, large servers
    
Red Hat
Published By: Intel Corporation     Published Date: Aug 25, 2014
Sponsored by: NEC and Intel® Xeon® processor
Servers with the Intel® Xeon® processor E7 v2 family in a four-CPU configuration can deliver up to twice the processing performance, three times the memory capacity, and four times the I/O bandwidth of previous models. Together with their excellent transaction processing performance, these servers provide the high level of availability essential to enterprise systems via advanced RAS functions that guarantee the integrity of important data while also reducing costs and the frequency of server downtime. Intel, the Intel logo, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation in the U.S. and/or other countries.
Tags : 
enterprise systems, platform, datacenter, servers, virtualization, customer value, analytics, application owners, system integrators, big data, reliability, enterprise, availability, serviceability, processor, performance testing, server virtualization
    
Intel Corporation
Published By: Gigamon     Published Date: Oct 19, 2017
Read the IDG Tech Dossier, A Security Delivery Platform Benefits the Entire Organization, to learn how a comprehensive, well-integrated security platform provides the foundation for the next generation of cybersecurity. By uniting a variety of security solutions and appliances for efficient operation through network visibility and security workflow orchestration, organizations benefit from continuous and pervasive network visibility, fault tolerance, scaling, and optimal CPU utilization, thereby improving security and reducing cost. Download now!
Tags : 
    
Gigamon
Published By: IBM     Published Date: May 04, 2009
Live migration is essential for a dynamic data center environment. But until now, it's been impossible between different generations of processors. Intel IT and End User Platform Integration have done so successfully with Intel® Virtualization Technology FlexMigration. This white paper explains how.
Tags : 
ibm, express seller, virtualization intel, xeon, processor, 5500, bladecenter, system x, power, energy costs, hardware, cooling, flexmigration, processors, proof-of-concept, poc, virtual machines, vms, virtual machines, vmware esx 3.5u2
    
IBM
Published By: WIN Enterprises     Published Date: Jan 06, 2011
This paper provides background on the Intel Atom processor for the embedded market, the next-generation Atom Pineview CPUs, and platform-level solutions.
Tags : 
embedded atom processor, pineview processor, networking, firewall, network security, ip pbx, anti-virus, vpn
    
WIN Enterprises
Published By: Intel     Published Date: Sep 27, 2019
As data constantly changes and expands, data centers increasingly face capacity, performance, and cost limitations related to existing memory and storage solutions. Intel Optane data center (DC) technology addresses these challenges by placing data closer to the CPU and closing the gap between traditional memory and storage options, thus transforming the memory and storage tier.
Tags : 
    
Intel
Published By: Trend Micro SaaS     Published Date: Oct 08, 2008
Businesses are experiencing a dramatic increase in spam and email-based attacks. These assaults not only hurt employee productivity, they consume valuable IT staff time and infrastructure resources. These threats can also expose organizations to data leaks, compliance issues, and legal risks. Trend Micro's SaaS email security solution blocks spam, viruses, phishing, and other email threats before they touch your network, helping you reclaim IT staff time, end-user productivity, bandwidth, mail server storage, and CPU capacity. Optional content filtering enforces compliance and helps prevent data leaks.
Tags : 
saas, trend, trend micro, software as a service, email threats, mail server, productivity, bandwidth, trendlabs, email security, security, interscan messaging, service level agreement, sla, virus, spam, phishing, distributed denial of service, ddos, filtering
    
Trend Micro SaaS
Published By: Inside HPC Media LLC     Published Date: Nov 11, 2019
In a world characterized by ever-increasing generation and consumption of digital information, the ability to analyze data to find insights in real time has become a competitive advantage. An advanced network must address how best to transfer growing amounts of data quickly and efficiently, and how to perform analysis on that data on-the-fly. The Co-Design technology transition has revolutionized the industry, clearly illustrating that the traditional CPU-centric data center architecture, wherein as many functions as possible are onloaded onto the CPU, is outdated. The transition to the new data-centric architecture requires that today’s networks must be fast and they must be efficient, which means they must offload as many functions as possible from the CPU to other places in the network, enabling the CPU to focus on its primary role of handling the compute.
Tags : 
    
Inside HPC Media LLC
Published By: Pure Storage     Published Date: Oct 09, 2018
Flash storage is moving ever deeper into mainstream data processing. As a result, enterprises increasingly understand not only the performance benefits but also the secondary economic benefits that come with deploying flash at scale. The combination of all these advantages (lower latencies, higher throughput and greater bandwidth, higher storage densities, significantly lower power and space requirements, better CPU utilization, fewer servers and correspondingly lower software licensing fees, lower administration costs, and greater device-level reliability) makes all-flash arrays (AFAs) an economically compelling alternative to traditional storage architectures originally designed for hard disk drives (HDDs). While growth rates for hybrid flash arrays (HFAs) and HDD-only arrays are declining steeply, …
Tags : 
    
Pure Storage