• 7 Posts
  • 76 Comments
Joined 1 year ago
Cake day: July 22nd, 2023






  • Funny, I saw this taken to an extreme a while back.

    Someone posted for software help on some forum about something and… they described everything. I shit you not, their description was a determinate system in and of itself.

    CPU, GPU, SSD, ram, thermal fans, size measurements, age, resolution, price point, model, kernel version, installed package count, filesystem setup, update log, journalctl, dmesg, Xorg log, genome sequence. And the kicker?

    First comment:

    ok but i’m not sure what you’re asking. what’s the problem, exactly?

    No other comments.


  • Here’s something positive: when people precisely mention what they’ve already tried on a problem!

    If someone’s stuck on a problem and defines what help they need, then I have no thoughts either way. It’s just a problem, and something to be helped through. Neutral.

    But if they describe what they did already, then I think “Wow, this person really put in some I-don’t-give-up effort! Nice work, bro!”



  • This isn’t a qu-- actually you heard that already (˃ . ˂˶)

    I can definitely attest to the culture, which is a breath of fresh air compared to a lot of networks (e.g. that Draw a Duck post is probably far beyond a lot of platforms’ capabilities/proclivities)

    I think some of it boils down to:

    • The Lemmy Algorithm. This is a big flaw with Reddit – people have the attention span for the first ten comments, and then subcomment upvotes halve (with decent std. dev – we aren’t Zipf’s Law devotees there) until invisibility. I don’t think my Reddit comments are even seen, let alone replied to. But here, new comments have a chance.
    • The sense of “mineness”. As another commenter here said, there’s a responsibility to raise your communities right, and another to interact (hence, variably lower hostility). I don’t post much, but I respond a lot to the people who comment in them, because I feel that I have to contribute to keep this sanctum humanly alive.
    • At risk of sounding self-absorbed/elitist, the entry level. People are here because they were dissatisfied with the state of other sites and then made a jump; this acts as a sieve that, to an extent, raises the quality of what you find when sorting by new. (This has limitations, of course, and it isn’t necessarily an argument that Lemmy should never go mainstream.)

    Just my conjectures ¯\_(ツ)_/¯



  • comment 2/2

    B. The Social Value of Privacy

    Some utilitarians like Etzioni frame societal needs and individual needs as a dichotomy where society should usually win (p. 761 or pdf page 17). Others like Dewey think “individual rights are not trumps, but are protections by society from its intrusiveness” that should be measured in welfare, not utility. “Part of what makes a society a good place in which to live is the extent to which it allows people freedom from the intrusiveness of others” (p. 762 or pdf page 18). So, privacy can manifest in our right not to be intruded upon.

    Section IV. The problem with the “Nothing to Hide” argument

    A. Understanding the Many Dimensions of Privacy

    Privacy isn’t about hiding a wrong, concealment, or secrecy (p. 764 or pdf page 20).

    Being watched has “chilling effects [i.e. getting scared into not doing something]” that “harm society because, among other things, they reduce the range of viewpoints expressed and the degree of freedom with which to engage in political activity”; but even so, it’s kinda super hard to prove that a chilling effect happened, so it’s easy for a Nothing to Hider to say that the NSA’s “limited surveillance of lawful activity will not chill behavior sufficiently to outweigh the security benefits” (p. 765 or pdf page 21). Personal damage from privacy violations is hard to prove by nature, but it still exists.

    If we use the taxonomy, we notice that the NSA thingamabob has:

    • Aggregation: if some mysterious guy Kafkaesquely compiles a crapton of data without any of your knowledge – with human bureaucratic “indifference, errors, abuses, frustration, and lack of transparency and accountability” – then they could pretty easily decide that they can guess what you might be wanting to hide or predict what People Like You might do later. Oopsie: it’s kind of hard to refute or hide a “future” behavior.
    • Exclusion: You have no idea what they’re doing or if it is CORRECT information. That’s a kind of due process problem and a power imbalance – the NSA is insulated from accountability even though they have hella power over citizens.
    • Secondary use: “The Administration said little about how long the data will be stored, how it will be used, and what it could be used for in the future. The potential future uses of any piece of personal information are vast, and without limits or accountability on how that information is used, it is hard for people to assess the dangers of the data being in the government’s control”

    But the Nothing to Hide argument only focuses on one or two of these definitions and ignores the others. So it’s unproductive.

    (p. 766-767 or pdf page 22-23)

    B. Understanding Structural Problems

    Privacy harm isn’t usually one big dramatic harm, like that one case where Rebecca Schaeffer and Amy Boyer were killed by a DMV-data-using stalker and a database-company-using stalker respectively (p. 768 or pdf page 24); it’s closer to a bunch of minor things, like how gradual pollution is.

    Airlines violated their privacy policies after 9/11 by giving the government a load of passenger info. Courts decided the alleged contractual damage didn’t amount to anything and rejected the contract claim. However, this breach of trust falls under the secondary-use part of the taxonomy and is a power imbalance in the social trust between corpo and individual: if the stated promise is meaningless, companies can do whatever they want with data. This is a structural harm even if it’s hard to prove your personal damages (p. 769-770 or pdf pages 25-26).

    There should be oversight – warrants need probable cause, wiretaps should be minimal and with judicial supervision – Bush oopsied here (p. 771 or pdf page 27).

    “Therefore, the security interest should not get weighed in its totality against the privacy interest. Rather, what should get weighed is the extent of marginal limitation on the effectiveness of a government information gathering or data mining program by imposing judicial oversight and minimization procedures. Only in cases where such procedures will completely impair the government program should the security interest be weighed in total, rather than in the marginal difference between an unencumbered program versus a limited one. Far too often, the balancing of privacy interests against security interests takes place in a manner that severely shortchanges the privacy interest while inflating the security interests. Such is the logic of the nothing to hide argument” (p. 771-772 or pdf page 27-28).

    Section V. Conclusion

    The Nothing to Hide argument defines privacy too narrowly and ignores the other problems of surveillance and data mining.


  • le user generated summary (no gee-pee-tee was used in this process):

    comment 1/2

    Section I. Introduction

    skip :3

    Section II. The “Nothing to Hide” argument

    We expand the “nothing to hide” argument into a more compelling, defensible thesis. That way we can attack it more cleanly.

    “The NSA surveillance, data mining, or other government information-gathering programs will result in the disclosure of particular pieces of information to a few government officials, or perhaps only to government computers. This very limited disclosure of the particular information involved is not likely to be threatening to the privacy of law-abiding citizens. Only those who are engaged in illegal activities have a reason to hide this information. Although there may be some cases in which the information might be sensitive or embarrassing to law-abiding citizens, the limited disclosure lessens the threat to privacy. Moreover, the security interest in detecting, investigating, and preventing terrorist attacks is very high and outweighs whatever minimal or moderate privacy interests law-abiding citizens may have in these particular pieces of information.” (p. 753, or pdf page 9)

    Section III. Conceptualizing Privacy

    A. A Pluralistic Conception of Privacy (aka “what’s the definition”)

    Privacy can’t be defined as intimate information (Social Security/religion isn’t “intimate”), or the right to be let alone (shoving someone and not leaving them alone isn’t a privacy violation), or Orwellian 1984-style surveillance as chilling social control (your beverage use history isn’t social control) (p. 755-756 or pdf pages 11-12).

    Privacy is kind of blobby so we define it as a taxonomy of similar stuff:

    • Information Collection
      • Surveillance
      • Interrogation
      • Main problem: “an activity by a person, business, or government entity creates harm by disrupting valuable activities of others”, whether the disruption is physical, emotional, a chilling of socially beneficial behavior like free speech, or a causing of power imbalances like executive branch power.
    • Information Processing
      • Aggregation
      • Identification
      • Insecurity: Information might be abused. You can think of ways ;)
      • Secondary Use
      • Exclusion: People have no access to their data and no say in how it is used
    • Information Dissemination
      • Breach of Confidentiality
      • Disclosure
      • Exposure
      • Increased Accessibility
      • Blackmail
      • Appropriation
      • Distortion
      • Main problem: How info can be spread, or be threatened to be spread
    • Invasion
      • Intrusion
      • Decisional Interference
      • Main problem: Your decisions are regulated

    (p. 758-759, or pdf page 14-15)

    So privacy is a set of protections against a set of related problems (p. 763-764 or pdf page 19-20).