DITF March 15, 2024 Kick-off meeting

Mar 15, 2024

Attendees

Here: Christian Ward, Jill Strykowski, Corinne Ornelas, Can Li, Tim Fluhr, Heather Cribbs

Absent: Christina Hennessey, Dolly Lopez

Agenda

Goal: Determine starting topics and plans


Views and relevancy recommendations

Looked over table created by CH (See spreadsheet)

CW: How do we determine what relevancy is “good” for our users?
-- JS: Suggest we develop and use UX personas; make recommendations based on those personas, and then libraries can create and update View settings based on which personas they think are most relevant to their campus user group

CL, HC: when doing testing, be aware of other factors such as timing of indexing, CDI updates, availability settings

Look at University of Washington(?) methodology for collecting example search data

CW, CO: Look at common issues from LibChat/Reference sessions that should be focused on as recurring problems and/or used as test cases

HC: Break up the chart by columns and address each as a separate issue

CL, CW: Do others look at the JSON and PNX code? Is it helpful for understanding?

For Group: How do we develop rubrics for both testing and persona creation?

For CH: Can we ask each campus to provide quick explanation of what changes they recall making to view settings and ranking and what their reasoning was?

CL emailed: I’d like to briefly share what I recently learned and have been experimenting with for adjusting relevancy ranking. With the Primo Search API, I can retrieve JSON records with the PNX record score. Records are listed with scores from highest to lowest, and the order matches what we see in the OneSearch UI. These scores are different from the scores we see in a browser. When adjusting a boost factor, I can see how the score changes correspondingly. This provides a predictable pattern, from which I can determine what boost factor would be most appropriate for the desired ranking.
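For anyone who wants to try the same experiment, below is a minimal sketch of that workflow (not CL’s actual script): it calls the Primo Search API and prints each record with its relevance score. The gateway URL, view/tab/scope values, and the exact JSON field that holds the score are assumptions to adjust for your own environment.

```python
# Sketch only: list ranked records and scores from the Primo Search API.
# The gateway region, view ID, tab, scope, and score field location below
# are placeholder assumptions - inspect your own response JSON to confirm.
import requests

API_URL = "https://api-na.hosted.exlibrisgroup.com/primo/v1/search"  # region-specific gateway (assumption)
params = {
    "q": "any,contains,climate change",   # query in Primo's field,operator,value form
    "vid": "01CALS_INST:01CALS",          # hypothetical view ID
    "tab": "Everything",                  # hypothetical tab
    "scope": "MyInst_and_CI",             # hypothetical scope
    "limit": 10,
    "apikey": "YOUR_API_KEY",             # supply a real Primo API key
}

resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()

for doc in resp.json().get("docs", []):
    pnx = doc.get("pnx", {})
    title = pnx.get("display", {}).get("title", ["(no title)"])[0]
    # Where the score appears in the JSON is an assumption; adjust as needed.
    score = pnx.get("search", {}).get("score", doc.get("@score", "n/a"))
    print(f"{score}\t{title}")
```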

FRBR learning and improvements

Group agreed this topic should be reviewed and revisited for the whole CSU

Uncertain if it will improve results as desired, but it would be good to know

Title/portfolio matching in Primo

For CH, JS: do we have any updates in ExLibris tickets or release notes on this topic?

Check in with Nikki DeMoville - ask if she can share her background information, examples and solution/development requests

Resource types across system

JS: Can I give us the crazy task of documenting complete mappings for: Resource type vs. Material type vs. Physical item type in Alma → Primo → CDI → GTI → digital collections, etc.?
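If it helps to scope that mapping exercise, here is a minimal sketch of what one row of a shared mapping worksheet could look like; every column name and sample value is a hypothetical placeholder, not an agreed CSU schema.

```python
# Hypothetical starter schema for a type-mapping worksheet.
# Column names and the sample row are illustrative only.
import csv

columns = [
    "alma_resource_type",
    "alma_material_type",
    "alma_physical_item_type",
    "primo_resource_type",
    "cdi_resource_type",
    "gti_resource_type",
    "digital_collections_type",
    "notes",
]

sample_row = {
    "alma_resource_type": "Book - Physical",
    "alma_material_type": "Book",
    "alma_physical_item_type": "Book",
    "primo_resource_type": "books",
    "cdi_resource_type": "",            # to confirm per campus
    "gti_resource_type": "",
    "digital_collections_type": "",
    "notes": "Example only; verify against actual campus configuration",
}

# Write the header plus one example row to a shared CSV.
with open("type_mapping.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=columns)
    writer.writeheader()
    writer.writerow(sample_row)
```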

Is ExLibris going to do a normalization project on this data like they are doing with CDI subjects?

Secondary resource type work

For CH/JS: check with RMFC

OA resource tagging for Primo filter

For CH, JS: do we have any updates in ExLibris tickets or release notes on this topic?

Not otherwise discussed

Enhancement request to improve course name/ID as a filter in Primo

Currently indexed and sorted as a full string – should sort alphabetically and then numerically (see the sketch below)
did not discuss
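A minimal sketch of the desired “alpha, then number” ordering, using made-up course IDs; this is for illustration only and is not how Primo currently indexes the field.

```python
# Sketch of sorting course IDs by alpha prefix first, then numeric part.
# Sample values and splitting logic are illustrative assumptions.
import re

def course_sort_key(course_id: str):
    """Split a course ID into (alpha prefix, number) so that
    'BIOL 9' sorts before 'BIOL 101' instead of after it."""
    match = re.match(r"([A-Za-z ]+?)\s*(\d+)", course_id)
    if not match:
        return (course_id, 0)
    prefix, number = match.groups()
    return (prefix.strip().upper(), int(number))

courses = ["BIOL 101", "BIOL 9", "ART 20", "ART 101"]
print(sorted(courses))                        # full-string sort: 'BIOL 101' before 'BIOL 9'
print(sorted(courses, key=course_sort_key))   # desired: 'BIOL 9' before 'BIOL 101'
```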

Pendo???

did not discuss

Linked data and AI??

did not discuss

Action items

Jill to create shared drive for depositing documentation
Group members to assign themselves to sub-groups (and start studying things!):
    Relevancy ranking + other view settings
    FRBR/Dedupe
    Resource, material, item type mapping

For CH/JS:

Title/portfolio matching: do we have any updates in ExLibris tickets or release notes?
OA tagging for Primo filter: do we have any updates in ExLibris tickets or release notes?
Secondary resource types: do we have any updates or to-dos from RMFC?

Recommended reading

Measuring and Predicting Search Engine Users' Satisfaction

Dan, O., & Davison, B. (2016). Measuring and Predicting Search Engine Users’ Satisfaction. ACM Computing Surveys (CSUR), 49(1), 1–35. https://doi.org/10.1145/2893486

Web Search Engines - Not Yet a Reliable Replacement for Bibliographic Databases

Hughes, E. (2018). Web Search Engines - Not Yet a Reliable Replacement for Bibliographic Databases. Evidence Based Library and Information Practice, 13(3), 85–87. https://doi.org/10.18438/eblip29378

Lessons Learned: A Primo Usability Study

Brett, K., Lierman, A., & Turner, C. (2016). Lessons Learned: A Primo Usability Study. Information Technology and Libraries, 35(1), 7–25. https://doi.org/10.6017/ital.v35i1.8965

A Framework for Measuring Relevancy in Discovery Environments

Galbreath, B. L., Merrill, A., & Johnson, C. M. (2021). A Framework for Measuring Relevancy in Discovery Environments. Information Technology and Libraries, 40(2), 1–17. https://doi.org/10.6017/ital.v40i2.12835