{"id":5565,"date":"2023-12-18T18:58:03","date_gmt":"2023-12-18T09:58:03","guid":{"rendered":"https:\/\/mytlab.org\/?page_id=5565"},"modified":"2023-12-22T10:22:28","modified_gmt":"2023-12-22T01:22:28","slug":"theme","status":"publish","type":"page","link":"https:\/\/mytlab.org\/en\/theme\/","title":{"rendered":"Research themes"},"content":{"rendered":"<p><a id=\"top\"><\/a><\/p>\n<hr>\n<p>We have five research themes.<\/p>\n<div><a href=\"#at\">Assistive technology<\/a> \/ <a href=\"#se\">Smart environment<\/a> \/ <a href=\"#ba\">Behavior analysis<\/a> \/ <a href=\"#cs\">Communication system<\/a> \/ <a href=\"#fa\">Friendly AI<\/a><\/div>\n<p><a id=\"at\"><\/a><\/p>\n<hr>\n<p><span class=\"highlight\">Assistive technology<\/span><\/p>\n<h5>Research theme overview<\/h5>\n<p>We research technologies that enable people facing physical and social challenges, such as people with disabilities and the elderly, to engage in activities <span class=\"highlight_normal\">more conveniently and comfortably<\/span>. 
Our work includes a system that automatically generates accessibility maps based on people&#8217;s walking activities and their engagement in location-based interactive games, and a system that facilitates communication among individuals with visual and hearing impairments.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/assistive_technology_en-1024x785.png\" alt=\"\" width=\"500\" class=\"alignnone size-large wp-image-5570\" srcset=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/assistive_technology_en-1024x785.png 1024w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/assistive_technology_en-300x230.png 300w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/assistive_technology_en-768x589.png 768w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/assistive_technology_en.png 1043w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h5>Projects<\/h5>\n<ul>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/bscanner\/\">BScanner<\/a>: Crowdsourcing platform for constructing accessibility maps supporting multiple participation modes<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/vr-barrier-simulator\/\">BSim<\/a>: Low-cost, highly realistic wheelchair simulator<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/tapmessenger\/\">Tap Messenger<\/a>: Interface enabling people with disabilities to communicate solely by tapping<\/li>\n<\/ul>\n<p><a id=\"se\"><\/a><br \/>\n<a href=\"#top\">Back to top<\/a><\/p>\n<hr>\n<p><span class=\"highlight\">Smart environment<\/span><\/p>\n<h5>Research theme overview<\/h5>\n<p>Combining AI and IoT, we research technologies that enable systems to provide appropriate assistance through <span class=\"highlight_normal\">simple user operations or unconscious actions by individuals<\/span>. 
Our work includes developing smart home systems, featuring innovative drawers that can locate stored items and intelligent doors that can provide weather forecasts. We are also developing paper media systems that display relevant information using torn edges or arrangements of text on paper.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/smart_environment_en-1024x785.png\" alt=\"\" width=\"500\" class=\"alignnone size-large wp-image-5574\" srcset=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/smart_environment_en-1024x785.png 1024w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/smart_environment_en-300x230.png 300w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/smart_environment_en-768x589.png 768w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/smart_environment_en.png 1043w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h5>Projects<\/h5>\n<ul>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/findrawers\/\">FINDrawers<\/a>: Drawers capable of locating stored items<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/xseal\/\">xSeal<\/a>: Compact device capable of making pieces of furniture intelligent<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/mimiconne\/\">mimiconne<\/a>: Digital signage enabling content selection by mimicking movements<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/tornedge\/\">Tornedge<\/a>: Method for transferring electronic information by tearing and handing over paper<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/kappan\/\">Kappan<\/a>: Book location identification technology connecting physical books and digital media<\/li>\n<\/ul>\n<p><a id=\"ba\"><\/a><br \/>\n<a href=\"#top\">Back to top<\/a><\/p>\n<hr>\n<p><span class=\"highlight\">Behavior analysis<\/span><\/p>\n<h5>Research theme overview<\/h5>\n<p>Using cameras and sensors, we analyze and model human behavioral patterns to explore <span 
class=\"highlight_normal\">more efficient methods of communication<\/span>. By conducting multimodal analysis of facial expressions, voice, and spoken content, we scientifically identify the requirements for effective praise. Additionally, we detect scenes of heightened discussion during communication based on the analysis of brainwave data.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/behavior_analysis_en-1024x785.png\" alt=\"\" width=\"500\" class=\"alignnone size-large wp-image-5571\" srcset=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/behavior_analysis_en-1024x785.png 1024w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/behavior_analysis_en-300x230.png 300w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/behavior_analysis_en-768x589.png 768w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/behavior_analysis_en.png 1043w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h5>Projects<\/h5>\n<ul>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/praiser\/\">Praiser<\/a>: Modeling of effective praising behaviors<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/ms-analyzer\/\">MS-Analyzer<\/a>: Communication analysis based on thinking state estimation<\/li>\n<\/ul>\n<p><a id=\"cs\"><\/a><br \/>\n<a href=\"#top\">Back to top<\/a><\/p>\n<hr>\n<p><span class=\"highlight\">Communication system<\/span><\/p>\n<h5>Research theme overview<\/h5>\n<p>We study communication systems that take into account delicate user psychology, focusing on <span class=\"highlight_normal\">minimizing feelings of embarrassment and guilt<\/span>. 
We are developing co-working space systems that facilitate gradual self-disclosure even among strangers, and video conferencing systems where the degree of facial blurring changes according to the level of familiarity.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/communication_system_en-1024x785.png\" alt=\"\" width=\"500\" class=\"alignnone size-large wp-image-5572\" srcset=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/communication_system_en-1024x785.png 1024w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/communication_system_en-300x230.png 300w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/communication_system_en-768x589.png 768w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/communication_system_en.png 1043w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h5>Projects<\/h5>\n<ul>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/hazy-rooms\/\">HazyRooms<\/a>: Video conferencing employing blurring to adjust anonymity levels progressively<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/meeting_viz\/\">Meeting Viz<\/a>: Method for visualizing conversation dynamics in remote meetings<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/ao_mediator\/\">A\/O Mediator<\/a>: Approach for gradually altering anonymity levels in shared space communication<\/li>\n<\/ul>\n<p><a id=\"fa\"><\/a><br \/>\n<a href=\"#top\">Back to top<\/a><\/p>\n<hr>\n<p><span class=\"highlight\">Friendly AI<\/span><\/p>\n<h5>Research theme overview<\/h5>\n<p>We research methods to realize AI that is not necessarily fast or accurate, but <span class=\"highlight_normal\">AI that humans can easily feel familiar with<\/span>. 
We are developing conversational agents that employ humor techniques from the Japanese comedic art of Manzai, such as boke (the funny man) and tsukkomi (the straight man), as well as agents that express empathy through ambiguous movements.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/friendly_ai_en-1024x785.png\" alt=\"\" width=\"500\" class=\"alignnone size-large wp-image-5573\" srcset=\"https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/friendly_ai_en-1024x785.png 1024w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/friendly_ai_en-300x230.png 300w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/friendly_ai_en-768x589.png 768w, https:\/\/mytlab.org\/wp\/wp-content\/uploads\/2023\/12\/friendly_ai_en.png 1043w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h5>Projects<\/h5>\n<ul>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/joke-agent\/\">Joker<\/a>: Conversational agents performing humor techniques of boke and tsukkomi<\/li>\n<li><a href=\"https:\/\/mytlab.org\/en\/project\/vague-agent\/\">Vague Agent<\/a>: Agents expressing empathy through movements with a high degree of ambiguity<\/li>\n<\/ul>\n<p><a href=\"#top\">Back to top<\/a><\/p>\n<hr>\n<p><a class=\"button-small blue_green rounded3\" href=\"https:\/\/mytlab.org\/en\/project\/\">Projects<\/a><\/p>\n<hr>\n","protected":false},"excerpt":{"rendered":"<p>We have five research themes. 
Assistive technology \/ Smart environment \/ Behavior analysis \/ Communication sys [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":"","_locale":"en_US","_original_post":"https:\/\/mytlab.org\/?page_id=4073"},"_links":{"self":[{"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/pages\/5565"}],"collection":[{"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/comments?post=5565"}],"version-history":[{"count":25,"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/pages\/5565\/revisions"}],"predecessor-version":[{"id":5973,"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/pages\/5565\/revisions\/5973"}],"wp:attachment":[{"href":"https:\/\/mytlab.org\/wp-json\/wp\/v2\/media?parent=5565"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}