narkata

V.I.P.
  • Posts

    6,967
  • Joined

  • Days Won

    276
  • Feedback

    100%

All content by narkata

  1. @Magnitas so, are you going to play? Right now it's even £3.74 on Steam with the summer discount
  2. I use ESET and have no problems with it; it doesn't slow my phone down. Since I use all sorts of apps on my phone - Amazon, eBay, PayPal, a banking app, the PSN store, Steam - it's better to have an antivirus. Of course, it's up to you whether you need one; for example, ESET costs £9.99 for the first year and £6.99 after that
  3. Steam is barely loading, and as always I already own those games; at most I might need to buy some indie game. https://steamstat.us/ shows 12.5m users online right now
  4. All gamers on the train are scared right now How to increase the settings of an AMD graphics card User Winter time
  5. SAW 8 returns under the new title Jigsaw http://www.imdb.com/title/tt3348730/?ref_=nv_sr_1
  6. @scalman better buy PS+ now, because you'll get games at really good discounts, cheaper than discs
  7. narkata

    Far Cry 5

    There was already a teaser for this game back in '98
  8. @gedunex now that they've run the PS+ promotion, PS+ won't give you much good for a while :D:D The main thing is that there are sales for those who have PS+
  9. narkata

    Spoiler :D

    open your own retro shop, Retave
  10. @cheburator well yes, it'll be cheaper, but I don't think by half
  11. So you'll only be spending pennies on games now
  12. Introduction

AMD today took wraps off a processor family that constitutes arguably the company's most important launch in years. Codenamed Naples, that family is now known as AMD Epyc. Epyc represents AMD's firm intention to return to the x86-based server market dominated by Intel after the effective demise of the Opteron chips from their heyday in 2003. Appreciating that Intel holds a near monopoly in server-class CPUs and chipsets, AMD has much to gain and little to lose. We'll discuss the various models, talk through the architecture underpinning Epyc, evaluate benchmarks provided by AMD at a Tech Day yesterday, and then see if Epyc, as an ecosystem, has the wherewithal to challenge the incumbent Intel Xeon chips that hold market-share hegemony.

Model Numbers

Going from the highest level of detail and working our way in, AMD productises Epyc into a 7000-series family consisting of nine processors ranging from eight cores and 16 threads through to 32 cores and 64 threads. Targeting the meat of the server and premium workstation market means that two Epyc processors can be run on a single board, or 2P in server parlance, offering up to 128 threads in a top-level configuration. AMD's research indicates that 95 per cent of x86-based servers fall into the 1P-2P category. The meagre core count of the Epyc 7251 processor is deliberate in two ways: it provides the cheapest path to these server processors and is designed for systems where the per-core cost of licensing relevant software is a budgetary concern. Rising up the stack brings with it the goodness of more cores/threads and speeds. Epyc 7281, 7301 and 7351 all share the same 16C/32T topology but run at different base and boost speeds as well as different power budgets.
The reason for the latter, explained AMD's Scott Aylor, is down to the speed of the memory controller resident within the processor, as operating it at DDR4-2666 consumes more power than, say, DDR4-2400, hence the 180W and 155W ratings, respectively. We can conjecture that AMD is binning Zen cores at the wafer level in order to determine those with the best frequency-to-voltage characteristics. These are then kept for Epyc chips, enabling up to 32 cores to run at solid speeds whilst the whole package consumes less than 200W.

The fastest, and most expensive, trio all house 32 cores in what is known as a multi-chip module. Again, final speeds dictate pricing and comparison against extant Intel Xeon processors. AMD's research indicates that 25 per cent of servers ship with just a single CPU in situ, so for markets where only a single processor is necessary - workstation and entry-level server spring to mind - AMD is also offering a trio of Epyc chips that are fused at the factory to work as uniprocessors. The Epyc 7551P, 7401P and 7351P are otherwise identical to their 2P-capable brethren though are priced lower in order to gain market share in these environments. Every Epyc processor uses an SP3 LGA socket - where the pins are on the motherboard, not the processor, contrary to desktop Ryzen - and AMD has confirmed that two future, improved Epyc models, codenamed Rome and Milan, will also maintain socket compatibility for simpler upgrading.

Though we call them processors, it would be more accurate to refer to Epyc as an SoC (system on chip), as each package integrates all of the IO functionality and memory controllers. Speaking of memory, a fully-populated Epyc chip can handle two Dimms for each of its eight channels, so 16 in total, or 2,048GB (2TB) when using 128GB sticks. We'll see just how the motherboard guys get around creating standard-sized boards for two massive Epyc processors, a potential 32 Dimms for memory, and numerous slots for IO.
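That memory ceiling is easy to sanity-check. A minimal sketch, using only the figures quoted above (the helper name is my own, purely illustrative):

```python
# Sanity check of Epyc's quoted memory ceiling, using figures from the
# article: 8 channels per socket, 2 DIMMs per channel, 128GB sticks.
CHANNELS_PER_SOCKET = 8
DIMMS_PER_CHANNEL = 2
DIMM_CAPACITY_GB = 128

def max_memory_gb(sockets: int = 1) -> int:
    """Peak installable memory, in GB, for a given socket count."""
    return sockets * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL * DIMM_CAPACITY_GB

print(max_memory_gb(1))  # 2048 GB (2TB), matching the article's figure
print(max_memory_gb(2))  # 4096 GB for a fully populated 2P board
```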
Architecture, Implementation

Epyc is based on the Zen architecture that is also the foundation for the Ryzen desktop CPUs. As you may know, those parts top out at eight cores and 16 threads, with the assumed, upcoming Threadripper CPU doubling that count, though given the socket change, server-optimised Epyc and client Ryzen 7/5/3 chips will not be interoperable. AMD has since confirmed that, from a motherboard perspective, Threadripper and Epyc are also incompatible. That isn't to say they don't have significant commonality. The core building blocks behind Epyc use the same Zen core as Ryzen, so it's worth refreshing your knowledge by heading over to our introductory article. And if you want a simpler eye chart to see how the Zen core compares with the previous Bulldozer and Intel's Broadwell, feast your peepers on this slide. What's ostensibly different is how Epyc is distinct from an implementation point of view, especially as the core count scales up to 32 in the premier parts. Let's go from the outside in and start with IO first.

Lots of IO

Every Epyc processor possesses 128 lanes of IO, compared with 40 lanes for the latest Broadwell-based Xeons. This means that one can drive a huge number of eclectic devices from a single chip. For example, 64 lanes can be used for, say, four full-bandwidth graphics accelerators, you can chuck another 32-odd lanes at premium storage options, and so on. Intel's current generation cannot touch this amount of IO, clearly, but AMD's advantage isn't as huge as the base numbers would suggest, as a number of lanes would be reserved for base connectivity such as networking, Sata, etc. Intel encounters the same problem but gets around it by adding 20 or so lanes from its PCH 'southbridge'. The end result, still, is that Epyc enjoys a real-world 2x IO advantage in a 1P environment.
That advantage means that fewer on-motherboard switches need to be used to expand lane counts, thus simplifying motherboard design and potentially lowering cost. These IO lanes can be used for either PCIe (8Gbps), Sata (6Gbps), or grouped together for Infinity Fabric as a chip-to-chip interconnect in a 2P system. In that case, each processor reserves the equivalent of 64 PCIe lanes to connect to the other (128 in total, therefore), hence reducing the potential amount of IO from 256 to the same 128 lanes. Having heaps of IO being fed in and out of the processor inevitably puts strain on intra-chip and memory bandwidth, so a balanced design needs lots of both to ensure that IO doesn't become a bottleneck.

Moving into the chip - the need for four dies for all Epyc CPUs

Here is a simplistic view of an Epyc chip, comprised of four dies in a multi-chip module. It is important to understand that all Epyc chips, regardless of the number of stated cores, are built this way. Put simply, each of the four dies holds the equivalent of a Ryzen 7 processor. This means two CCX units - each holding four cores and an associated L3 cache - are connected to one another via intra-chip Infinity Fabric. Each two-CCX die has its own, individual dual-channel memory controller. Adding all this up means that a fully-populated Epyc chip has eight CCXes, 32 cores, and an aggregate of eight-channel memory running at a maximum of DDR4-2666 with one Dimm per channel and DDR4-2400 with two Dimms per channel. As memory bandwidth is key to solid performance in the datacentre and only two channels are connected to each die, the eight- and 16-core Epyc chips have to use all four dies. Reinforcing what we said above, this also means that all Epyc chips share the same silicon topology - there is no way to get the required level of bandwidth in an MCM setup other than by going down this road.
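The lane accounting above works out as follows; a rough sketch using the article's numbers (the function name is mine):

```python
# IO-lane budget per the article: each Epyc chip exposes 128 lanes; in a
# 2P system each chip dedicates 64 of them to the inter-socket fabric.
LANES_PER_CHIP = 128
FABRIC_LANES_PER_CHIP = 64  # only consumed in a 2P configuration

def usable_io_lanes(sockets: int) -> int:
    """Lanes left for PCIe/Sata devices after the fabric takes its share."""
    total = sockets * LANES_PER_CHIP
    if sockets == 2:
        total -= 2 * FABRIC_LANES_PER_CHIP  # 64 lanes lost from each chip
    return total

print(usable_io_lanes(1))  # 128
print(usable_io_lanes(2))  # 128 -- the same usable total, as noted above
```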
As well as the intra-die CCXes connected via Infinity Fabric, each die in turn is also connected to every other one via that same Infinity Fabric, operating at 42.6GB/s bi-directionally, adding up to 170.4GB/s across four dies. Remember that number. Now let's see if Epyc is a balanced MCM design by looking at effective memory bandwidth and also IO speed. There's a total of 170.4GB/s of aggregated memory bandwidth, too, as each of the eight channels, operating at a peak 2,666MT/s, can shift its share of this amount into the chip. Note that this is for all dies going full chat, as technically each one only has access to two memory channels. Looking at a 2P system, the inter-chip Infinity Fabric offers up a potential 152GB/s between processors, intimating that all the theoretical numbers stack up, that is, show no obvious signs of bottlenecking. There may be some IO-to-memory bottlenecks in a 1P environment where almost all of the 128 PCIe 3.0 lanes are used for super-fast storage; each die has 64GB/s of PCIe bandwidth and, as we have seen, 'only' 42.6GB/s of memory transfers available. The point is, building an efficient MCM chip with lots of IO hanging off it also requires tonnes of memory bandwidth and lots of intra- and inter-die bandwidth, as well as inter-chip speed. Epyc would not have been possible without Infinity Fabric there to hook it all up.

Power optimisations

Thinking back to the Zen architecture itself, AMD has endowed it with thousands of sensors that monitor temperature, voltage, power and frequency. Each of these is plumbed into algorithms that then set the optimum balance between frequency and voltage at any given point in time. This is usual for modern processors, but where AMD says it differs is with respect to the granularity and speed of the on-the-fly changes.
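Those bandwidth figures can be reconstructed from the DDR4 transfer rate. A sketch, assuming 8 bytes per 64-bit transfer and decimal gigabytes, which is how the article appears to round its numbers:

```python
# DDR4 bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit transfer.
def channel_bw_gbs(mt_per_s: int) -> float:
    """Per-channel bandwidth in decimal GB/s."""
    return mt_per_s * 8 / 1000

per_channel = channel_bw_gbs(2666)  # ~21.3 GB/s per DDR4-2666 channel
per_die = 2 * per_channel           # two channels per die, ~42.6 GB/s
per_chip = 8 * per_channel          # eight channels, ~170.6 GB/s aggregate

# The article rounds these to 42.6GB/s per die and 170.4GB/s per chip.
print(per_die, per_chip)
```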
The reason we mention this is that such precise adjustments are far more valuable in the server space, where shaving off a few watts here and there from a rack can add up to significant power savings across the datacentre, because even with all the other components present in a box - memory, NIC(s), HDD, fans, etc. - the CPU(s), understandably, consume over 50 per cent of the total power of a standard, GPU-less 2P server. Therefore, each of the up to 32 cores within Epyc has its own regulator that governs voltage for a given frequency. As you would expect, silicon doesn't have perfectly even characteristics across the die, meaning that some cores will require higher voltage than others to run at a particular speed. Using the latest adaptive voltage frequency scaling (AVFS), AMD reckons that each core's voltage can be tuned to within 2mV for the desired frequency, or with more granularity than the regulator itself allows.

Interestingly, the TDPs quoted on the first page, ranging from 120W through to 180W, are configurable through the BIOS for an OEM with enough knowledge of cooling. Take the top-bin Epyc 7601 as an example. Where power consumption is less of a concern, that chip can be hiked to 200W, with the extra energy driven towards higher speeds and voltage. Of course, the gains will be limited, because AMD already has an optimum speed/voltage curve, but it's possible to eke out that bit more performance. The same chip can be driven at a lower voltage/speed in order to increase the performance-per-watt metric in power-constrained environments, too, and this is why each processor has a range. Final boost speeds will depend upon just how this configurability is adopted, but you can have a reasonable range of curves for each chip. Remember that aggregate 128-lane Infinity Fabric interconnect between chips in a 2P environment?
Lighting up 150GB/s of data traffic doesn't help with respect to power consumption, understandably, so in cases where the workload is far more bound by compute, this link dynamically reduces speed and voltage in order to save power and then reinvests it into more per-chip compute. It's an obvious way of ensuring that each watt is used sensibly.

Security

No exposition of a modern server processor would be complete without touching upon security. AMD adds an ARM-based Cortex-A5 Secure Processor within the silicon of every Zen core. This little chip's job is to provide hardware-based support for two new technologies called Secure Memory Encryption (SME) and Secure Encrypted Virtualisation (SEV). SME offers real-time memory encryption, enabled at boot time. It works by marking pages of memory as encrypted through the page-table entries. What this means is that any kind of memory can be AES-encrypted to mitigate against physical memory attacks. The memory isn't hidden in any meaningful way, of course, but any snooping or accesses will show it as encrypted. A single encryption key is generated and stored on chip. AMD says that such encryption, run via a couple of AES engines, causes minimal access-latency increases.

Leveraging the increasing number of cores and threads within a modern server means that multiple virtual machines can be run on it. Software known as a hypervisor emulates the hardware and enables these virtual machines to function. So, for example, 'your' remote server may be one of a number of servers, virtualised, and run on a single physical machine in the cloud. Ensuring that VMs are protected from compromised hypervisors is becoming increasingly important. This is where the SEV technology comes in, according to AMD. SEV encrypts the parts of memory used by each virtual machine - this is your 'part' of the server - and issues a unique key to each VM.
The point is, compromised hypervisors know that multiple guest VMs are running but are not able to access their contents, because their memory is cryptographically isolated.

I'd say these will be decent server CPUs from AMD with 128 PCIe lanes. I can picture that CPU working with 8 graphics cards at PCIe x16. And even the weakest CPU in the range will still support 128 PCIe lanes (curious what the price will be)
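The per-VM key idea behind SEV can be illustrated with a toy model. This is purely conceptual, not AMD's implementation: real SEV uses hardware AES engines in the memory path, while this sketch substitutes a trivial XOR cipher to show why one VM's key cannot decrypt another VM's pages.

```python
from secrets import token_bytes

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in XOR cipher; SEV itself uses AES in the memory controller.
    # XOR is symmetric, so applying the same key twice recovers the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Each VM gets its own key, so a hypervisor reading raw memory sees
# only ciphertext, and keys are never shared between guests.
vm_keys = {"vm0": token_bytes(16), "vm1": token_bytes(16)}
page = b"guest secret data"
in_memory = toy_encrypt(page, vm_keys["vm0"])  # what a snooper would see
assert in_memory != page                       # stored form is ciphertext
assert toy_encrypt(in_memory, vm_keys["vm0"]) == page  # vm0's key recovers it
```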
  13. In my room, 4 devices use 5GHz and one uses 2.4GHz, and in the other rooms another 6 devices are on 2.4GHz
  14. It's actually been a long time since PS+ had discounts like this; well, there was one, but only £35, and even that was on Amazon
  15. @gedunex I took 1 year, then checked how long it's valid for: until the next renewal, 20/12/2018
  16. @Dowydas I never get tired of watching them; compared with other games, MGS 4 is the best when it comes to cutscenes. I've finished it loads of times, and without watching the cutscenes my record was about 3h
  17. The most PS+ I've ever had stacked up was 25 months. There was a promotion selling 12 months for £20, but that was very long ago, I think about 6 months after PS+ launched; there was a good deal like that at the Game UK e-shop, so I took 2 years straight away
  18. All it takes is going into the settings and that's it. For example, I downloaded a colour profile for my monitor and it really improved the picture and colours; when I switched profiles the difference was like night and day