Are you looking for a tool that can diff legal documents line by line or clause by clause? If the latter, I’d bet an LLM with a large context size could do a pretty good job, especially if you used a script (or another pass through the LLM) to break them down into like sections, so that you could just compare e.g. all Controlling Law sections with each other and all IP Indemnification sections with each other.
Now that I think about it, by tuning the prompt (and keeping the temperature very low, like 0) you could probably get it to return everything from proper diffs to summaries of conceptual differences. And it could definitely handle multiple documents at once if you were to break them into like pieces ahead of time.
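The "split into like sections, then diff the matching pairs" step doesn't even need an LLM for the comparison part. Here's a minimal sketch in Python: the section names and the naive splitter are assumptions for illustration (a real contract splitter, or an LLM pass, would replace `split_sections`), but the pairing-then-diffing flow is the idea described above.

```python
import difflib
import re

def split_sections(doc: str) -> dict:
    # Naive splitter: a line starting with a known section title opens a
    # new section. The title list here is a placeholder assumption.
    sections, current = {}, None
    for line in doc.splitlines():
        m = re.match(r"^(Controlling Law|IP Indemnification)\b", line)
        if m:
            current = m.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: "\n".join(body) for name, body in sections.items()}

def diff_like_sections(doc_a: str, doc_b: str):
    # Pair up sections that exist in both documents and diff each pair.
    a, b = split_sections(doc_a), split_sections(doc_b)
    for name in sorted(set(a) & set(b)):
        delta = difflib.unified_diff(
            a[name].splitlines(), b[name].splitlines(),
            fromfile=f"A/{name}", tofile=f"B/{name}", lineterm="")
        yield name, "\n".join(delta)  # empty string when sections match

doc_a = ("Controlling Law\nThis Agreement is governed by Delaware law.\n"
         "IP Indemnification\nVendor shall indemnify Customer.")
doc_b = ("Controlling Law\nThis Agreement is governed by New York law.\n"
         "IP Indemnification\nVendor shall indemnify Customer.")

for name, delta in diff_like_sections(doc_a, doc_b):
    if delta:
        print(name)
        print(delta)
```

Only the Controlling Law pair produces a diff here; identical sections yield nothing, which keeps the output focused on the clauses that actually changed.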
From the slide deck (which is well worth a read IMO): "The Trust Index is the average percent trust in NGOs, business, government and media." The same deck indicates that government is seen "as Far Less Competent and Ethical than Business." So what this really tells me is that business (as a whole) is doing a FAR better job of marketing/PR than governments are, which is to be expected, I suppose.
In UNIX-y systems, ./ is your current local directory. So if I was in /usr/home/will and I extracted your file, I would expect any file like ./foo.txt to be extracted to /usr/home/will/foo.txt, and files like ./testar/bar.txt to be extracted to /usr/home/will/testar/bar.txt (creating the testar directory) – or is that not what you’re talking about?
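To make that concrete, here's a small sketch using Python's tarfile module (the filenames foo.txt and testar/bar.txt are just the examples from above): members stored with a leading ./ are extracted relative to whatever directory you extract into.

```python
import os
import tarfile
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, "src")
os.makedirs(os.path.join(src, "testar"))
with open(os.path.join(src, "foo.txt"), "w") as f:
    f.write("hello")
with open(os.path.join(src, "testar", "bar.txt"), "w") as f:
    f.write("world")

# Build an archive whose members carry explicit ./ prefixes.
tar_path = os.path.join(base, "demo.tar")
with tarfile.open(tar_path, "w") as t:
    t.add(os.path.join(src, "foo.txt"), arcname="./foo.txt")
    t.add(os.path.join(src, "testar", "bar.txt"), arcname="./testar/bar.txt")

# Extract somewhere else: "./" resolves to the extraction directory,
# and ./testar/ is created as a new subdirectory there.
dest = os.path.join(base, "dest")
os.makedirs(dest)
with tarfile.open(tar_path) as t:
    t.extractall(dest)

print(sorted(os.listdir(dest)))  # foo.txt and the new testar directory
```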
One thing I never thought about is how the longer a scientist is out of work (due to war, political instability, etc.), the more likely it is that they’ll be lost to science (in other words, the less likely they are to return to scientific work when they can). That could potentially be like a chilling effect on steroids.
You can kinda do it with Google Custom Search Engine, which is basically a thin wrapper around Google. In a regular Google search you can use syntax like -site:ignorethisdomain.com to exclude specific domains (I do this with Pinterest whenever I’m searching for images, for example). But manually typing in a large list of blacklisted domains would be tedious, so instead you can set up a CSE with every domain you want to ignore and then just use the special URL as your search engine.
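If you'd rather stick with regular Google searches, a tiny helper can tack the -site: exclusions onto any query for you. The function name and the domain list here are hypothetical, just to show the syntax described above:

```python
def with_exclusions(terms: str, excluded_domains: list) -> str:
    # Append one "-site:domain" operator per blacklisted domain.
    return terms + "".join(f" -site:{d}" for d in excluded_domains)

q = with_exclusions("mid-century chair", ["pinterest.com", "quora.com"])
print(q)  # mid-century chair -site:pinterest.com -site:quora.com
```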
draw.io is a capable web-based flowcharting program. The source code is on GitHub, but I’ve never tried self-hosting it.
For 100TB it’s worth looking into a dedicated storage server – there are tons of them available for cheap. LabGopher makes it easy to track sales on eBay by price/storage/RAM/whatever.
This is a bigger problem the higher up the food chain you go, so giving your cat bluefin is worse (not to mention much more expensive) than the common stuff (albacore or light tuna) that comes in a can. And of course it’s even worse for humans, for the same reason – and we live longer, so the heavy metals have more time to accumulate.
I think this is it exactly, and in fact I found a Science Daily article that explains the trick (your assumption about the time scale is correct, and they have a clever arrangement of diodes that lets you kind of “pump” the charge out). They specifically mention not violating the 2nd Law, too :)
It’s more like a generator that uses ambient heat as the “battery”. With previous systems you could only extract useful work from heat if you had a heat gradient (e.g. one area that’s hotter than another). The innovation here is that graphene’s unique combination of thinness and conductivity basically lets you convert the Brownian “heat” of the substance itself (not the environment) into electricity.
The Wreck of the Edmund Fitzgerald
Don’t go boating in a storm, folks.
As an anecdote, I work at a midsized software company as a product manager. I have an international team of about 20 that I manage from home (full-time remote). Overall there is some loss of speed and agility versus having a full-time in-office staff. I’m not a fan of trying to quantify productivity per se, but for things like estimations and deviations there’s no question that in my environment at least, things move a little slower and take a little longer. Now personally, the fact that we can hire engineers anywhere across the globe (including in LCOL areas), don’t have to pay rent and related fees, and that some of the best engineers specifically want full-time remote more than outweighs the reduced agility (putting aside all of the other potential QOL benefits) – and if needed, some of the savings from reduced rent and salaries could be used to expand the team anyway. Thankfully my management team agrees and has continued to pursue a remote/hybrid environment. But for those places that value speed and agility most it could be a bit of a problem.
I just watched that video somewhat recently and couldn’t understand that quote. There are a number of MOND models that literally don’t involve dark matter at all – no new particles added, no unexplained masses needed. So in that case, wouldn’t “how we combine them” just be “set dark matter to 0 and use this different set of equations to solve for gravity in certain circumstances”?
I worked in a field that managed a lot of technology in retail stores. The big ones know everything about you, it’s just astonishing. At the time (around 15 years ago) there was very little oversight, but also most CIOs were inept and couldn’t really make the data sing and dance. Today that is very much no longer true, and it’s almost too easy to build a comprehensive profile of an “anonymous” guest and then attach it to their personally identifiable information, all without their consent or knowledge.
I don’t see it in this particular paper, but I’ve seen other semi-serious proposals where you also build a partially reflective “mirror” in a fixed position away from the habitat so that it always gets indirect sunlight (and then I guess you just pull the curtains closed when it’s time to sleep :) )
Maybe true, but even at $3500 the Vision Pro would be about the cheapest thing in the operating theater anyway.