4K does seem excessive, particularly since I've already reached the age where I can't stare at small fonts all day anymore. This monitor is considered a "2K" (3440x1440) and I just need to fit a lot of stuff on the screen at once.
What are you using the screen for?
I don't invest much in screens. I usually just get a good deal on second-hand ones, which are enough to ssh into uncensored and do office stuff. You can get quite nice screens for 10 bucks if you go second-hand.
But then I am a broke dude so...
Well, there's the thing ... I am a data center architect, so I've got a lot
on my screen all day long. I do a lot of network diagrams, so being able to
keep a big Visio window on the screen while I have documentation and/or
terminals going at the same time is a big deal.
I have the monitor set up and am looking at it for the first time right now.
And I can definitely say 2K is optimal here. Enough resolution to be readable at small font sizes, but not ridiculously dense.
4K requires jacking up your DPI setting so that fonts render larger. There's little benefit to those extra pixels besides the economic stimulus of selling more DisplayPort cables (HDMI connections will usually force 30Hz refresh).
"Intels high demand has been of particular note since mid-2018. Since the
discovery of hardware vulnerabilities such as Spectre and Meltdown, and the
fixes that reduced overall performance of a large number of the installed
server base, many of Intel's customers have been increasing the size of their
server deployments in order to re-match their original capacity. This issue
caused a sharp up-tick in demand of Intel processors, and Intel has driven
newer architectures that try to minimise those performance deficits (with
an overall performance uplift when the new architecture is factored in). As
a result, Intel moved some of its fabrication capacity away from its future
10nm process and back onto its 14nm in order to meet demand."
--https://www.anandtech.com/show/14909/intel-supply-in-q4-output-capacity-up-supplydemand-still-high
Ugh. So they built a product with a bad flaw in it, and in response, people
are buying more product from them, increasing their revenues while they revert
to an older, crappier version of the product?
The way the tech industry rewards shittiness is appalling sometimes.
I dunno... I guess TSMC is getting adequate yield out of their 7nm, but probably most of that supply is going to Apple for the iPhone 11 Pro. And those are small chips, which are easier to manufacture.
So it makes sense that Intel is sticking to the better-yielding process to supply industry demand. All very disappointing, though.
It'll be interesting to see how this all plays out over the next 6 to 18 months. AMD's strategy of building multiple-small-chip packages on TSMC 7nm is starting to look smart. But I wonder if AMD/TSMC will really be able to ramp up supply enough to gain market share.
https://www.tomshardware.com/news/apple-increasing-iphone-11-production-a13-7nm-tsmc-amd,40553.html
"7nm Supply Showdown: AMD, Nvidia May Fight for Scraps as Apple Reportedly Ups A13 Production"
Intel (and AMD) shortages will continue
https://www.anandtech.com/show/15031/intel-boosts-14nm-capacity-25-in-2019-but-shortages-will-persist-in-q4
Pretty much, right? This is the highest-volume product these days, and it's also the one that can be manufactured with the best yields.
I would probably be able to make use of some of the ARM-based hardware that's showing up on AWS, but I just don't want to be bothered setting up the toolchain...
When I made the decision to keep all of my personal computing needs off of my work computer, I started with a Raspberry Pi. Then I moved to an Intel NUC that Ragnar gave me a few months ago, and that's what I'm using now. And now that I have the latest Kubuntu running on my big LG ultrawide monitor, it looks soooooooo good that it made me decide...
I want a computer.
A real computer. A "main" computer. One with a nice fast processor and room for several disks, so I can retire the old laptop in the garage that I'm using as a NAS with external drives hanging off it. One that can sit on my desk and be THE computer. One with an LCD mounted on the front panel to display the date and time from the attached GPS puck, because it's also my NTP server.
It's been too long.
What's the motivation to do that? Do they charge less per unit of compute
because it consumes less power?
Well, the chips are also cheaper and simpler, right? I haven't looked at the pricing, but it stands to reason that ARM servers should be cheaper per instance-hour. AWS even offers AMD-based servers that are cheaper per instance-hour than the Intel equivalent. It's not the most recent product from AMD, so you don't get the same level of CPU performance, but some applications are more I/O bound and don't typically *need* the full CPU performance anyway.
I have a small fleet of machines that definitely falls into that category. The workloads are short-duration and spiky, and our CPUs spend the vast majority of their time idle.
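For the mostly-idle case, moving to the AMD variant on EC2 really is trivial; it's the same API call with a different instance-type string. Here's a minimal boto3 sketch (not anyone's actual deployment; the AMI ID is a placeholder, and m5a.large is just one example of an AMD-backed type):

    # Minimal sketch: launching the AMD variant of an EC2 instance with boto3.
    # Assumes AWS credentials are already configured. The AMI ID is a
    # placeholder; m5a.large is the AMD-backed sibling of m5.large (same
    # 2 vCPU / 8 GiB shape, lower on-demand rate).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="m5a.large",          # swap to "m5.large" for the Intel equivalent
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

For CPUs that sit idle most of the day, that one-string change is about the cheapest optimization there is.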
Coincidentally, this just popped up in the news:
https://www.anandtech.com/show/15181/aws-designs-32core-arm-cpu-for-cloud-servers
Yeah. Although serverless is the way to go where you can, at least for static content.
At this very moment I'm working on retiring a couple of nginx instances (that only serve static) and replacing them with S3+CloudFront. This should be cheaper *and* scale way better, if I got the estimate remotely right.
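If it's useful, here's roughly what the deploy side of that looks like as a boto3 sketch (not my actual setup; the bucket name and distribution ID are placeholders):

    # Sketch of deploying static content to S3 behind CloudFront. Bucket name
    # and distribution ID are placeholders. Uploads everything under ./public
    # with a guessed Content-Type, then invalidates the CloudFront cache so
    # the new files get served.
    import mimetypes
    import uuid
    from pathlib import Path

    import boto3

    BUCKET = "example-static-site"      # placeholder bucket
    DISTRIBUTION_ID = "E1234EXAMPLE"    # placeholder CloudFront distribution

    s3 = boto3.client("s3")
    cloudfront = boto3.client("cloudfront")

    def upload_site(root="public"):
        """Upload every file under `root`, preserving relative paths as keys."""
        for path in Path(root).rglob("*"):
            if path.is_file():
                key = path.relative_to(root).as_posix()
                content_type, _ = mimetypes.guess_type(path.name)
                s3.upload_file(
                    str(path), BUCKET, key,
                    ExtraArgs={"ContentType": content_type or "application/octet-stream"},
                )

    def invalidate_all():
        """Tell CloudFront to drop its cached copies of everything."""
        cloudfront.create_invalidation(
            DistributionId=DISTRIBUTION_ID,
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": uuid.uuid4().hex,  # must be unique per request
            },
        )

    if __name__ == "__main__":
        upload_site()
        invalidate_all()

No servers to patch, and CloudFront absorbs traffic spikes without any capacity planning.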
I have a 10,000 square foot serverless data center in my backyard. It uses
no electricity and doesn't require a network. I am happy to report that I've
achieved 100.00% uptime, and am maintaining a 1.0 power factor. It's also
"green" (at least in the summer).
Patience, grasshopper. Since my need is not urgent I intend to savor the
planning and building. I also intend to hunt around some discard piles for
non-obsolete components such as the case and power supply, so that I can spend
more of my budget on things like CPU and RAM. And maybe a watercooling line
that runs outside to the creek in my front yard. (Ok that last bit was fantasy
but it's fun to think about.)