
Create an "always safe / always non-sensitive" version of Openverse #4920

Open
zackkrida opened this issue Sep 12, 2024 · 2 comments
Labels
  • 🕹 aspect: interface (Concerns end-users' experience with the software)
  • 🌟 goal: addition (Addition of new feature)
  • 🟩 priority: low (Low priority and doesn't need to be rushed)
  • 🧱 stack: frontend (Related to the Nuxt frontend)

Comments

zackkrida (Member) commented Sep 12, 2024

Problem

It seems likely that educational institutions using Openverse may want to restrict access to sensitive content on the platform. We've previously received feedback about sensitive content from people identifying as education-affiliated:

  • Students saying they saw sensitive results while required to use Openverse for school
  • Educators who have said they would use Openverse, but can't due to sensitive content concerns

Description

Create a version of Openverse where the sensitive-content filter is "locked" and cannot be modified. This could be handled in a number of ways; which approach to take should be determined by researching how institutional firewalls commonly work. Some ideas could also be derived from how Google handles this with its "Lock SafeSearch for accounts, devices & networks you manage" functionality.

The tl;dr there is that network admins set a DNS CNAME record mapping www.google.com to forcesafesearch.google.com, so every request from the managed network resolves to the SafeSearch-enforcing host.
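
A minimal sketch of how an analogous hostname-based lock could look on our side, assuming a hypothetical dedicated hostname like safe.openverse.org (the hostname, setting values, and function names below are illustrative, not existing parts of the codebase):

```ts
// Hypothetical helper: decide whether the sensitive-content filter must stay
// forced to "hide" based on the hostname the request arrived on.
// "safe.openverse.org" is an assumed, illustrative hostname.

type SensitiveContentSetting = "hide" | "blur" | "show"

const LOCKED_HOSTNAMES = new Set(["safe.openverse.org"])

/** True when the filter must be forced on and the UI toggle disabled. */
export function isSafetyLocked(hostname: string): boolean {
  return LOCKED_HOSTNAMES.has(hostname.toLowerCase())
}

/** Resolves the effective setting, ignoring user preference on locked hostnames. */
export function effectiveSetting(
  hostname: string,
  userPreference: SensitiveContentSetting
): SensitiveContentSetting {
  return isSafetyLocked(hostname) ? "hide" : userPreference
}

// Example: on the locked hostname, a user preference of "show" is overridden.
console.log(effectiveSetting("safe.openverse.org", "show")) // "hide"
console.log(effectiveSetting("openverse.org", "show")) // "show"
```

If a Google-style DNS approach were adopted, the same check would simply key off whichever hostname the managed network's CNAME points users at.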

Alternatives

Additional context

One potential issue is the "readiness" of Openverse to introduce this functionality. Are our results sufficiently "safe" to make this assurance to users? Is there a safety threshold we could or should measure (for example, a maximum rate of sensitive-content reports), and if so, should we determine it first?

zackkrida added the 🚦 status: awaiting triage (Has not been triaged & therefore, not ready for work) and ✨ goal: improvement (Improvement to an existing user-facing feature) labels on Sep 12, 2024
obulat (Contributor) commented Sep 13, 2024

One potential issue is the "readiness" of Openverse to introduce this functionality. Are our results sufficiently "safe" to make this assurance to users? Is there a safety threshold we could or should measure (for example, a maximum rate of sensitive-content reports), and if so, should we determine it first?

Maybe this version should use only non-user-generated media at first, as we do for some integrations?
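
For illustration, a rough sketch of what restricting searches to curated (non-user-generated) sources might look like against the Openverse API; the source slugs and the exact query parameter are assumptions to be verified against the API docs:

```ts
// Hypothetical sketch: constrain searches to an allowlist of curated providers
// that do not host user-generated uploads. Both the provider slugs and the
// "source" query parameter are assumptions, not confirmed API behavior.

const CURATED_SOURCES = ["met", "smithsonian", "rijksmuseum"] // illustrative only

export function buildSafeSearchUrl(query: string): string {
  const params = new URLSearchParams({
    q: query,
    source: CURATED_SOURCES.join(","),
  })
  return `https://api.openverse.org/v1/images/?${params.toString()}`
}

// Example: every search issued by the "always safe" frontend goes through this builder.
console.log(buildSafeSearchUrl("cats"))
// https://api.openverse.org/v1/images/?q=cats&source=met%2Csmithsonian%2Crijksmuseum
```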

obulat added the 🌟 goal: addition, 🕹 aspect: interface, 🧱 stack: frontend, and 🟩 priority: low labels and removed the 🚦 status: awaiting triage and ✨ goal: improvement labels on Sep 16, 2024
zackkrida (Member, Author) commented:
@obulat that's actually a phenomenal idea that would help get this "up and running" quite quickly! Cool 😎

Projects: 📋 Backlog
Development: No branches or pull requests
Participants: 2