sdbus++-gendir: avoid duplicate filenames with --list-all

With the `--list-all` option, the program can emit the same output
filename multiple times.  This causes problems when sdbus++-gendir is
used as part of a meson build, because a straightforward invocation of
the tool can result in duplicated build actions.  During the meson
conversion in phosphor-dbus-interfaces it was observed that the same
markdown files could be installed twice.

Create a hash table (bash associative array) to track which filenames
have already been emitted and suppress duplicate emission.
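
For illustration, the guard is the usual bash associative-array dedup
pattern; a standalone sketch (the names below are made up for the
example, not taken from the script):

    # Print each filename only the first time it is seen.
    declare -A seen
    for f in a.md b.md a.md; do
        if [ "x1" != "x${seen[$f]}" ]; then
            seen[$f]="1"
            echo "$f"
        fi
    done
    # Output: a.md, b.md -- the repeated a.md is suppressed.

Each filename is printed at most once, which is the property the meson
integration relies on.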

Signed-off-by: Patrick Williams <patrick@stwcx.xyz>
Change-Id: I1d8d69c6df3ec9b13acfe955f8cc3e81d8748033
diff --git a/tools/sdbus++-gendir b/tools/sdbus++-gendir
index 518d43b..921645b 100755
--- a/tools/sdbus++-gendir
+++ b/tools/sdbus++-gendir
@@ -76,6 +76,7 @@
     exit 1
 fi
 
+declare -A emitted_file_names
 # generate_single -- make a single call to sdbus++.
 #   $1: sdbus++ TYPE
 #   $2: sdbus++ PROCESS
@@ -94,20 +95,27 @@
         $sdbuspp $1 $2 $3 >> $outputdir/$4 &
     fi
 
-    # Always emit generated file name for foo-cpp and foo-header.
-    # Conditionally emit for everything else depending on $listall.
-    case "$2" in
-        *-cpp | *-header)
-            echo $outputdir/$4
-            ;;
+    # Emit filename as needed.
+    filename=$outputdir/$4
+    if [ "x1" != "x${emitted_file_names[$filename]}" ];
+    then
+        emitted_file_names[$filename]="1"
 
-        *)
-            if [ "xyes" == "x$listall" ];
-            then
-                echo $outputdir/$4
-            fi
-            ;;
-    esac
+        # Always emit generated file name for foo-cpp and foo-header.
+        # Conditionally emit for everything else depending on $listall.
+        case "$2" in
+            *-cpp | *-header)
+                echo $filename
+                ;;
+
+            *)
+                if [ "xyes" == "x$listall" ];
+                then
+                    echo $filename
+                fi
+                ;;
+        esac
+    fi
 
     # Ensure that no more than ${parallel} jobs are running at a time and if so
     # wait for at least one to finish.